00:00:00.001 Started by upstream project "autotest-per-patch" build number 132698 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.058 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.058 The recommended git tool is: git 00:00:00.058 using credential 00000000-0000-0000-0000-000000000002 00:00:00.060 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.100 Fetching changes from the remote Git repository 00:00:00.105 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.179 Using shallow fetch with depth 1 00:00:00.179 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.179 > git --version # timeout=10 00:00:00.220 > git --version # 'git version 2.39.2' 00:00:00.220 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.266 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.266 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.784 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.798 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.810 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.810 > git config core.sparsecheckout # timeout=10 00:00:06.821 > git read-tree -mu HEAD # timeout=10 00:00:06.840 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.862 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.862 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.941 [Pipeline] Start of Pipeline 00:00:06.956 [Pipeline] library 00:00:06.957 Loading library shm_lib@master 00:00:06.957 Library shm_lib@master is cached. Copying from home. 00:00:06.978 [Pipeline] node 00:00:06.986 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.988 [Pipeline] { 00:00:06.999 [Pipeline] catchError 00:00:07.001 [Pipeline] { 00:00:07.016 [Pipeline] wrap 00:00:07.024 [Pipeline] { 00:00:07.032 [Pipeline] stage 00:00:07.034 [Pipeline] { (Prologue) 00:00:07.340 [Pipeline] sh 00:00:07.627 + logger -p user.info -t JENKINS-CI 00:00:07.647 [Pipeline] echo 00:00:07.649 Node: CYP9 00:00:07.658 [Pipeline] sh 00:00:07.962 [Pipeline] setCustomBuildProperty 00:00:07.974 [Pipeline] echo 00:00:07.975 Cleanup processes 00:00:07.979 [Pipeline] sh 00:00:08.261 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.261 2397582 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.275 [Pipeline] sh 00:00:08.587 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.587 ++ grep -v 'sudo pgrep' 00:00:08.587 ++ awk '{print $1}' 00:00:08.587 + sudo kill -9 00:00:08.587 + true 00:00:08.604 [Pipeline] cleanWs 00:00:08.615 [WS-CLEANUP] Deleting project workspace... 00:00:08.615 [WS-CLEANUP] Deferred wipeout is used... 
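Note: the pgrep/kill sequence in the trace above only stays green because of the trailing `+ true` — the only match was the `pgrep` itself, so `kill -9` ran with no PIDs and failed. A minimal sketch of that cleanup as a standalone script (the `WORKSPACE` default is copied from the trace; the variable name itself is an assumption):

    #!/usr/bin/env bash
    # Kill leftover SPDK processes from a previous run in this workspace.
    WORKSPACE=${WORKSPACE:-/var/jenkins/workspace/nvmf-tcp-phy-autotest}

    # Match on the full command line (-af), drop the pgrep itself, keep the PIDs.
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')

    # With an empty PID list, kill fails; '|| true' keeps the step passing,
    # exactly like the '+ true' line in the trace above.
    sudo kill -9 $pids || true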
00:00:08.623 [WS-CLEANUP] done 00:00:08.629 [Pipeline] setCustomBuildProperty 00:00:08.644 [Pipeline] sh 00:00:08.934 + sudo git config --global --replace-all safe.directory '*' 00:00:09.010 [Pipeline] httpRequest 00:00:09.314 [Pipeline] echo 00:00:09.315 Sorcerer 10.211.164.20 is alive 00:00:09.322 [Pipeline] retry 00:00:09.325 [Pipeline] { 00:00:09.335 [Pipeline] httpRequest 00:00:09.340 HttpMethod: GET 00:00:09.340 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.341 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.345 Response Code: HTTP/1.1 200 OK 00:00:09.345 Success: Status code 200 is in the accepted range: 200,404 00:00:09.345 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.877 [Pipeline] } 00:00:10.894 [Pipeline] // retry 00:00:10.901 [Pipeline] sh 00:00:11.190 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.209 [Pipeline] httpRequest 00:00:11.585 [Pipeline] echo 00:00:11.587 Sorcerer 10.211.164.20 is alive 00:00:11.596 [Pipeline] retry 00:00:11.599 [Pipeline] { 00:00:11.613 [Pipeline] httpRequest 00:00:11.619 HttpMethod: GET 00:00:11.619 URL: http://10.211.164.20/packages/spdk_2bcaf03f7e1a7b5b9eda5347ed4235bfdef28dc5.tar.gz 00:00:11.620 Sending request to url: http://10.211.164.20/packages/spdk_2bcaf03f7e1a7b5b9eda5347ed4235bfdef28dc5.tar.gz 00:00:11.633 Response Code: HTTP/1.1 200 OK 00:00:11.633 Success: Status code 200 is in the accepted range: 200,404 00:00:11.634 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_2bcaf03f7e1a7b5b9eda5347ed4235bfdef28dc5.tar.gz 00:00:33.566 [Pipeline] } 00:00:33.584 [Pipeline] // retry 00:00:33.592 [Pipeline] sh 00:00:33.942 + tar --no-same-owner -xf spdk_2bcaf03f7e1a7b5b9eda5347ed4235bfdef28dc5.tar.gz 00:00:37.253 [Pipeline] sh 00:00:37.540 + git -C spdk log --oneline -n5 00:00:37.540 2bcaf03f7 nvme/rdma: Prevent submitting new recv WR when disconnecting 00:00:37.540 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing 00:00:37.540 77ee034c7 bdev/nvme: Add lock to unprotected operations around attach controller 00:00:37.540 48454bb28 bdev/nvme: Add lock to unprotected operations around detach controller 00:00:37.540 4b59d7893 bdev/nvme: Use nbdev always for local nvme_bdev pointer variables 00:00:37.552 [Pipeline] } 00:00:37.565 [Pipeline] // stage 00:00:37.574 [Pipeline] stage 00:00:37.576 [Pipeline] { (Prepare) 00:00:37.591 [Pipeline] writeFile 00:00:37.605 [Pipeline] sh 00:00:37.889 + logger -p user.info -t JENKINS-CI 00:00:37.902 [Pipeline] sh 00:00:38.189 + logger -p user.info -t JENKINS-CI 00:00:38.202 [Pipeline] sh 00:00:38.489 + cat autorun-spdk.conf 00:00:38.490 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.490 SPDK_TEST_NVMF=1 00:00:38.490 SPDK_TEST_NVME_CLI=1 00:00:38.490 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:38.490 SPDK_TEST_NVMF_NICS=e810 00:00:38.490 SPDK_TEST_VFIOUSER=1 00:00:38.490 SPDK_RUN_UBSAN=1 00:00:38.490 NET_TYPE=phy 00:00:38.498 RUN_NIGHTLY=0 00:00:38.502 [Pipeline] readFile 00:00:38.525 [Pipeline] withEnv 00:00:38.527 [Pipeline] { 00:00:38.538 [Pipeline] sh 00:00:38.825 + set -ex 00:00:38.825 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:38.825 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:38.825 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.825 ++ SPDK_TEST_NVMF=1 00:00:38.825 ++ SPDK_TEST_NVME_CLI=1 
00:00:38.825 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:38.825 ++ SPDK_TEST_NVMF_NICS=e810 00:00:38.825 ++ SPDK_TEST_VFIOUSER=1 00:00:38.825 ++ SPDK_RUN_UBSAN=1 00:00:38.825 ++ NET_TYPE=phy 00:00:38.825 ++ RUN_NIGHTLY=0 00:00:38.825 + case $SPDK_TEST_NVMF_NICS in 00:00:38.825 + DRIVERS=ice 00:00:38.825 + [[ tcp == \r\d\m\a ]] 00:00:38.825 + [[ -n ice ]] 00:00:38.825 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:38.825 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:38.825 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:38.825 rmmod: ERROR: Module irdma is not currently loaded 00:00:38.825 rmmod: ERROR: Module i40iw is not currently loaded 00:00:38.825 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:38.825 + true 00:00:38.825 + for D in $DRIVERS 00:00:38.825 + sudo modprobe ice 00:00:38.825 + exit 0 00:00:38.834 [Pipeline] } 00:00:38.848 [Pipeline] // withEnv 00:00:38.853 [Pipeline] } 00:00:38.865 [Pipeline] // stage 00:00:38.874 [Pipeline] catchError 00:00:38.876 [Pipeline] { 00:00:38.893 [Pipeline] timeout 00:00:38.894 Timeout set to expire in 1 hr 0 min 00:00:38.903 [Pipeline] { 00:00:38.943 [Pipeline] stage 00:00:38.946 [Pipeline] { (Tests) 00:00:38.956 [Pipeline] sh 00:00:39.239 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:39.239 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:39.239 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:39.239 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:39.239 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:39.239 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:39.239 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:39.239 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:39.239 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:39.239 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:39.239 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:39.239 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:39.239 + source /etc/os-release 00:00:39.239 ++ NAME='Fedora Linux' 00:00:39.239 ++ VERSION='39 (Cloud Edition)' 00:00:39.239 ++ ID=fedora 00:00:39.240 ++ VERSION_ID=39 00:00:39.240 ++ VERSION_CODENAME= 00:00:39.240 ++ PLATFORM_ID=platform:f39 00:00:39.240 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:00:39.240 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:39.240 ++ LOGO=fedora-logo-icon 00:00:39.240 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:00:39.240 ++ HOME_URL=https://fedoraproject.org/ 00:00:39.240 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:00:39.240 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:39.240 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:39.240 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:39.240 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:00:39.240 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:39.240 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:00:39.240 ++ SUPPORT_END=2024-11-12 00:00:39.240 ++ VARIANT='Cloud Edition' 00:00:39.240 ++ VARIANT_ID=cloud 00:00:39.240 + uname -a 00:00:39.240 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:00:39.240 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:42.543 Hugepages 00:00:42.543 node hugesize free / total 00:00:42.543 node0 1048576kB 0 / 0 00:00:42.543 node0 2048kB 0 / 0 00:00:42.543 node1 1048576kB 0 / 0 00:00:42.543 node1 2048kB 0 / 0 00:00:42.543 00:00:42.543 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:42.543 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:00:42.543 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:00:42.543 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:00:42.543 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:00:42.543 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:00:42.543 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:00:42.543 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:00:42.543 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:00:42.543 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:00:42.543 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:00:42.543 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:00:42.543 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:00:42.543 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:00:42.543 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:00:42.543 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:00:42.543 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:00:42.543 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:00:42.543 + rm -f /tmp/spdk-ld-path 00:00:42.543 + source autorun-spdk.conf 00:00:42.543 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:42.543 ++ SPDK_TEST_NVMF=1 00:00:42.543 ++ SPDK_TEST_NVME_CLI=1 00:00:42.543 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:42.543 ++ SPDK_TEST_NVMF_NICS=e810 00:00:42.543 ++ SPDK_TEST_VFIOUSER=1 00:00:42.543 ++ SPDK_RUN_UBSAN=1 00:00:42.543 ++ NET_TYPE=phy 00:00:42.543 ++ RUN_NIGHTLY=0 00:00:42.543 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:42.543 + [[ -n '' ]] 00:00:42.543 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:42.543 + for M in /var/spdk/build-*-manifest.txt 00:00:42.543 + [[ -f 
/var/spdk/build-kernel-manifest.txt ]] 00:00:42.543 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:42.543 + for M in /var/spdk/build-*-manifest.txt 00:00:42.543 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:42.543 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:42.543 + for M in /var/spdk/build-*-manifest.txt 00:00:42.543 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:42.543 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:42.543 ++ uname 00:00:42.543 + [[ Linux == \L\i\n\u\x ]] 00:00:42.543 + sudo dmesg -T 00:00:42.543 + sudo dmesg --clear 00:00:42.543 + dmesg_pid=2398575 00:00:42.543 + [[ Fedora Linux == FreeBSD ]] 00:00:42.543 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:42.543 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:42.543 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:42.543 + [[ -x /usr/src/fio-static/fio ]] 00:00:42.543 + export FIO_BIN=/usr/src/fio-static/fio 00:00:42.543 + FIO_BIN=/usr/src/fio-static/fio 00:00:42.543 + sudo dmesg -Tw 00:00:42.543 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:42.543 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:42.543 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:42.543 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:42.543 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:42.543 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:42.543 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:42.543 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:42.543 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:42.543 13:50:48 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:00:42.543 13:50:48 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:42.543 13:50:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:42.543 13:50:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:00:42.543 13:50:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:00:42.543 13:50:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:42.543 13:50:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:00:42.543 13:50:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:00:42.543 13:50:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:00:42.543 13:50:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:00:42.543 13:50:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:00:42.543 13:50:48 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:00:42.543 13:50:48 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:42.805 13:50:48 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:00:42.805 13:50:48 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:42.805 13:50:48 -- scripts/common.sh@15 -- $ shopt -s extglob 00:00:42.805 13:50:48 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:42.805 13:50:48 -- scripts/common.sh@552 -- $ 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:42.805 13:50:48 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:42.805 13:50:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.805 13:50:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.805 13:50:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.805 13:50:48 -- paths/export.sh@5 -- $ export PATH 00:00:42.805 13:50:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:42.805 13:50:48 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:42.805 13:50:48 -- common/autobuild_common.sh@493 -- $ date +%s 00:00:42.805 13:50:48 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733403048.XXXXXX 00:00:42.805 13:50:48 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733403048.qrpDcQ 00:00:42.805 13:50:48 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:00:42.805 13:50:48 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:00:42.805 13:50:48 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:42.805 13:50:48 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:42.805 13:50:48 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:42.805 13:50:48 -- common/autobuild_common.sh@509 -- $ get_config_params 00:00:42.805 13:50:48 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:00:42.805 13:50:48 -- common/autotest_common.sh@10 -- $ set +x 
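Note: each `paths/export.sh` step above prepends the same toolchain directories again, so `/opt/go/1.21.1/bin`, `/opt/golangci/1.54.2/bin`, and `/opt/protoc/21.7/bin` each appear three times in the final PATH. Harmless but noisy; a hedged sketch of an idempotent prepend (the `path_prepend` helper is illustrative, not part of export.sh):

    # Prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;            # already on PATH: do nothing
            *) PATH="$1:$PATH" ;;
        esac
    }

    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH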
00:00:42.805 13:50:48 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:42.805 13:50:48 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:00:42.805 13:50:48 -- pm/common@17 -- $ local monitor 00:00:42.805 13:50:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.805 13:50:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.805 13:50:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.805 13:50:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:42.805 13:50:48 -- pm/common@21 -- $ date +%s 00:00:42.805 13:50:48 -- pm/common@25 -- $ sleep 1 00:00:42.805 13:50:48 -- pm/common@21 -- $ date +%s 00:00:42.805 13:50:48 -- pm/common@21 -- $ date +%s 00:00:42.805 13:50:48 -- pm/common@21 -- $ date +%s 00:00:42.805 13:50:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733403048 00:00:42.805 13:50:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733403048 00:00:42.806 13:50:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733403048 00:00:42.806 13:50:48 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733403048 00:00:42.806 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733403048_collect-cpu-load.pm.log 00:00:42.806 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733403048_collect-vmstat.pm.log 00:00:42.806 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733403048_collect-cpu-temp.pm.log 00:00:42.806 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733403048_collect-bmc-pm.bmc.pm.log 00:00:43.748 13:50:49 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:00:43.748 13:50:49 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:43.749 13:50:49 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:43.749 13:50:49 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:43.749 13:50:49 -- spdk/autobuild.sh@16 -- $ date -u 00:00:43.749 Thu Dec 5 12:50:49 PM UTC 2024 00:00:43.749 13:50:49 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:43.749 v25.01-pre-297-g2bcaf03f7 00:00:43.749 13:50:49 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:43.749 13:50:49 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:43.749 13:50:49 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:43.749 13:50:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:00:43.749 13:50:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:00:43.749 13:50:49 -- common/autotest_common.sh@10 -- $ set +x 00:00:43.749 
************************************ 00:00:43.749 START TEST ubsan 00:00:43.749 ************************************ 00:00:43.749 13:50:50 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:00:43.749 using ubsan 00:00:43.749 00:00:43.749 real 0m0.001s 00:00:43.749 user 0m0.000s 00:00:43.749 sys 0m0.000s 00:00:43.749 13:50:50 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:00:43.749 13:50:50 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:43.749 ************************************ 00:00:43.749 END TEST ubsan 00:00:43.749 ************************************ 00:00:44.008 13:50:50 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:44.008 13:50:50 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:44.008 13:50:50 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:44.008 13:50:50 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:44.008 13:50:50 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:44.008 13:50:50 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:44.008 13:50:50 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:44.008 13:50:50 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:44.008 13:50:50 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:44.008 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:44.008 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:44.580 Using 'verbs' RDMA provider 00:01:00.072 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:15.043 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:15.043 Creating mk/config.mk...done. 00:01:15.043 Creating mk/cc.flags.mk...done. 00:01:15.043 Type 'make' to build. 00:01:15.043 13:51:19 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:01:15.043 13:51:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:15.043 13:51:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:15.043 13:51:19 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.043 ************************************ 00:01:15.043 START TEST make 00:01:15.043 ************************************ 00:01:15.043 13:51:19 make -- common/autotest_common.sh@1129 -- $ make -j144 00:01:15.043 make[1]: Nothing to be done for 'all'. 
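Note: the `using ubsan` block above shows the output shape of SPDK's `run_test` helper — an argument-count guard, a START banner, the timed command, an END banner with real/user/sys times. A rough sketch of that shape (not the actual autotest_common.sh implementation):

    # run_test <name> <cmd...>: banner the test, run it under 'time'.
    run_test() {
        local name=$1; shift
        [ $# -ge 1 ] || return 1     # mirrors the "'[' 3 -le 1 ']'" guard above
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }

    run_test ubsan echo 'using ubsan'   # as invoked in the trace above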
00:01:15.043 The Meson build system 00:01:15.043 Version: 1.5.0 00:01:15.043 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:15.043 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:15.043 Build type: native build 00:01:15.043 Project name: libvfio-user 00:01:15.043 Project version: 0.0.1 00:01:15.043 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:15.043 C linker for the host machine: cc ld.bfd 2.40-14 00:01:15.043 Host machine cpu family: x86_64 00:01:15.043 Host machine cpu: x86_64 00:01:15.043 Run-time dependency threads found: YES 00:01:15.043 Library dl found: YES 00:01:15.043 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:15.043 Run-time dependency json-c found: YES 0.17 00:01:15.043 Run-time dependency cmocka found: YES 1.1.7 00:01:15.043 Program pytest-3 found: NO 00:01:15.043 Program flake8 found: NO 00:01:15.043 Program misspell-fixer found: NO 00:01:15.043 Program restructuredtext-lint found: NO 00:01:15.043 Program valgrind found: YES (/usr/bin/valgrind) 00:01:15.043 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:15.043 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:15.043 Compiler for C supports arguments -Wwrite-strings: YES 00:01:15.043 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:15.043 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:15.043 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:15.043 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:15.043 Build targets in project: 8 00:01:15.043 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:15.043 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:15.043 00:01:15.043 libvfio-user 0.0.1 00:01:15.043 00:01:15.043 User defined options 00:01:15.043 buildtype : debug 00:01:15.043 default_library: shared 00:01:15.043 libdir : /usr/local/lib 00:01:15.043 00:01:15.043 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:15.612 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:15.612 [1/37] Compiling C object samples/null.p/null.c.o 00:01:15.612 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:15.612 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:15.612 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:15.612 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:15.612 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:15.612 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:15.612 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:15.612 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:15.612 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:15.612 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:15.612 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:15.612 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:15.612 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:15.612 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:15.612 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:15.612 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:15.612 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:15.612 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:15.612 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:15.612 [21/37] Compiling C object samples/server.p/server.c.o 00:01:15.612 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:15.612 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:15.612 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:15.612 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:15.612 [26/37] Compiling C object samples/client.p/client.c.o 00:01:15.873 [27/37] Linking target samples/client 00:01:15.873 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:15.873 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:15.873 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:15.873 [31/37] Linking target test/unit_tests 00:01:15.873 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:16.134 [33/37] Linking target samples/server 00:01:16.134 [34/37] Linking target samples/lspci 00:01:16.134 [35/37] Linking target samples/gpio-pci-idio-16 00:01:16.134 [36/37] Linking target samples/null 00:01:16.134 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:16.134 INFO: autodetecting backend as ninja 00:01:16.134 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
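Note: the libvfio-user build above boils down to a standard meson/ninja sequence; the commands below are reconstructed from the summary (`buildtype : debug`, `default_library : shared`, `libdir : /usr/local/lib`) rather than copied from SPDK's build scripts:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BUILD=$SPDK/build/libvfio-user/build-debug

    # Configure with the user-defined options shown in the summary above.
    meson setup "$BUILD" "$SPDK/libvfio-user" \
        --buildtype=debug --default-library=shared --libdir=/usr/local/lib

    ninja -C "$BUILD"                 # the [1/37]..[37/37] compile/link steps
    DESTDIR=$SPDK/build/libvfio-user meson install --quiet -C "$BUILD"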
00:01:16.134 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:16.395 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:16.395 ninja: no work to do. 00:01:22.980 The Meson build system 00:01:22.980 Version: 1.5.0 00:01:22.980 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:22.980 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:22.980 Build type: native build 00:01:22.980 Program cat found: YES (/usr/bin/cat) 00:01:22.980 Project name: DPDK 00:01:22.980 Project version: 24.03.0 00:01:22.980 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:22.980 C linker for the host machine: cc ld.bfd 2.40-14 00:01:22.980 Host machine cpu family: x86_64 00:01:22.980 Host machine cpu: x86_64 00:01:22.980 Message: ## Building in Developer Mode ## 00:01:22.980 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:22.980 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:22.980 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:22.980 Program python3 found: YES (/usr/bin/python3) 00:01:22.980 Program cat found: YES (/usr/bin/cat) 00:01:22.980 Compiler for C supports arguments -march=native: YES 00:01:22.980 Checking for size of "void *" : 8 00:01:22.980 Checking for size of "void *" : 8 (cached) 00:01:22.980 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:22.980 Library m found: YES 00:01:22.980 Library numa found: YES 00:01:22.980 Has header "numaif.h" : YES 00:01:22.980 Library fdt found: NO 00:01:22.980 Library execinfo found: NO 00:01:22.980 Has header "execinfo.h" : YES 00:01:22.980 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:22.980 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:22.980 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:22.980 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:22.980 Run-time dependency openssl found: YES 3.1.1 00:01:22.980 Run-time dependency libpcap found: YES 1.10.4 00:01:22.980 Has header "pcap.h" with dependency libpcap: YES 00:01:22.980 Compiler for C supports arguments -Wcast-qual: YES 00:01:22.980 Compiler for C supports arguments -Wdeprecated: YES 00:01:22.980 Compiler for C supports arguments -Wformat: YES 00:01:22.980 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:22.980 Compiler for C supports arguments -Wformat-security: NO 00:01:22.980 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:22.980 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:22.980 Compiler for C supports arguments -Wnested-externs: YES 00:01:22.980 Compiler for C supports arguments -Wold-style-definition: YES 00:01:22.980 Compiler for C supports arguments -Wpointer-arith: YES 00:01:22.980 Compiler for C supports arguments -Wsign-compare: YES 00:01:22.980 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:22.980 Compiler for C supports arguments -Wundef: YES 00:01:22.980 Compiler for C supports arguments -Wwrite-strings: YES 00:01:22.980 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:22.980 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:01:22.980 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:22.980 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:22.981 Program objdump found: YES (/usr/bin/objdump) 00:01:22.981 Compiler for C supports arguments -mavx512f: YES 00:01:22.981 Checking if "AVX512 checking" compiles: YES 00:01:22.981 Fetching value of define "__SSE4_2__" : 1 00:01:22.981 Fetching value of define "__AES__" : 1 00:01:22.981 Fetching value of define "__AVX__" : 1 00:01:22.981 Fetching value of define "__AVX2__" : 1 00:01:22.981 Fetching value of define "__AVX512BW__" : 1 00:01:22.981 Fetching value of define "__AVX512CD__" : 1 00:01:22.981 Fetching value of define "__AVX512DQ__" : 1 00:01:22.981 Fetching value of define "__AVX512F__" : 1 00:01:22.981 Fetching value of define "__AVX512VL__" : 1 00:01:22.981 Fetching value of define "__PCLMUL__" : 1 00:01:22.981 Fetching value of define "__RDRND__" : 1 00:01:22.981 Fetching value of define "__RDSEED__" : 1 00:01:22.981 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:22.981 Fetching value of define "__znver1__" : (undefined) 00:01:22.981 Fetching value of define "__znver2__" : (undefined) 00:01:22.981 Fetching value of define "__znver3__" : (undefined) 00:01:22.981 Fetching value of define "__znver4__" : (undefined) 00:01:22.981 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:22.981 Message: lib/log: Defining dependency "log" 00:01:22.981 Message: lib/kvargs: Defining dependency "kvargs" 00:01:22.981 Message: lib/telemetry: Defining dependency "telemetry" 00:01:22.981 Checking for function "getentropy" : NO 00:01:22.981 Message: lib/eal: Defining dependency "eal" 00:01:22.981 Message: lib/ring: Defining dependency "ring" 00:01:22.981 Message: lib/rcu: Defining dependency "rcu" 00:01:22.981 Message: lib/mempool: Defining dependency "mempool" 00:01:22.981 Message: lib/mbuf: Defining dependency "mbuf" 00:01:22.981 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:22.981 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:22.981 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:22.981 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:22.981 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:22.981 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:22.981 Compiler for C supports arguments -mpclmul: YES 00:01:22.981 Compiler for C supports arguments -maes: YES 00:01:22.981 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:22.981 Compiler for C supports arguments -mavx512bw: YES 00:01:22.981 Compiler for C supports arguments -mavx512dq: YES 00:01:22.981 Compiler for C supports arguments -mavx512vl: YES 00:01:22.981 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:22.981 Compiler for C supports arguments -mavx2: YES 00:01:22.981 Compiler for C supports arguments -mavx: YES 00:01:22.981 Message: lib/net: Defining dependency "net" 00:01:22.981 Message: lib/meter: Defining dependency "meter" 00:01:22.981 Message: lib/ethdev: Defining dependency "ethdev" 00:01:22.981 Message: lib/pci: Defining dependency "pci" 00:01:22.981 Message: lib/cmdline: Defining dependency "cmdline" 00:01:22.981 Message: lib/hash: Defining dependency "hash" 00:01:22.981 Message: lib/timer: Defining dependency "timer" 00:01:22.981 Message: lib/compressdev: Defining dependency "compressdev" 00:01:22.981 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:22.981 Message: lib/dmadev: Defining dependency "dmadev" 
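Note: the run of `Fetching value of define "__AVX512F__"` lines above is meson probing which CPU-feature macros the compiler defines under `-march=native` (reported as supported earlier in the log). A rough equivalent by hand, assuming a gcc/clang-style `-dM -E` macro dump:

    # Dump the predefined macros -march=native enables and pick out the
    # feature defines meson reports in the configure output above.
    cc -march=native -dM -E - </dev/null \
        | grep -E '__(AVX512(F|BW|CD|DQ|VL)|AES|AVX2?|PCLMUL|RDRND|RDSEED|VPCLMULQDQ)__'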
00:01:22.981 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:22.981 Message: lib/power: Defining dependency "power" 00:01:22.981 Message: lib/reorder: Defining dependency "reorder" 00:01:22.981 Message: lib/security: Defining dependency "security" 00:01:22.981 Has header "linux/userfaultfd.h" : YES 00:01:22.981 Has header "linux/vduse.h" : YES 00:01:22.981 Message: lib/vhost: Defining dependency "vhost" 00:01:22.981 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:22.981 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:22.981 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:22.981 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:22.981 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:22.981 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:22.981 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:22.981 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:22.981 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:22.981 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:22.981 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:22.981 Configuring doxy-api-html.conf using configuration 00:01:22.981 Configuring doxy-api-man.conf using configuration 00:01:22.981 Program mandb found: YES (/usr/bin/mandb) 00:01:22.981 Program sphinx-build found: NO 00:01:22.981 Configuring rte_build_config.h using configuration 00:01:22.981 Message: 00:01:22.981 ================= 00:01:22.981 Applications Enabled 00:01:22.981 ================= 00:01:22.981 00:01:22.981 apps: 00:01:22.981 00:01:22.981 00:01:22.981 Message: 00:01:22.981 ================= 00:01:22.981 Libraries Enabled 00:01:22.981 ================= 00:01:22.981 00:01:22.981 libs: 00:01:22.981 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:22.981 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:22.981 cryptodev, dmadev, power, reorder, security, vhost, 00:01:22.981 00:01:22.981 Message: 00:01:22.981 =============== 00:01:22.981 Drivers Enabled 00:01:22.981 =============== 00:01:22.981 00:01:22.981 common: 00:01:22.981 00:01:22.981 bus: 00:01:22.981 pci, vdev, 00:01:22.981 mempool: 00:01:22.981 ring, 00:01:22.981 dma: 00:01:22.981 00:01:22.981 net: 00:01:22.981 00:01:22.981 crypto: 00:01:22.981 00:01:22.981 compress: 00:01:22.981 00:01:22.981 vdpa: 00:01:22.981 00:01:22.981 00:01:22.981 Message: 00:01:22.981 ================= 00:01:22.981 Content Skipped 00:01:22.981 ================= 00:01:22.981 00:01:22.981 apps: 00:01:22.981 dumpcap: explicitly disabled via build config 00:01:22.981 graph: explicitly disabled via build config 00:01:22.981 pdump: explicitly disabled via build config 00:01:22.981 proc-info: explicitly disabled via build config 00:01:22.981 test-acl: explicitly disabled via build config 00:01:22.981 test-bbdev: explicitly disabled via build config 00:01:22.981 test-cmdline: explicitly disabled via build config 00:01:22.981 test-compress-perf: explicitly disabled via build config 00:01:22.981 test-crypto-perf: explicitly disabled via build config 00:01:22.981 test-dma-perf: explicitly disabled via build config 00:01:22.981 test-eventdev: explicitly disabled via build config 00:01:22.981 test-fib: explicitly disabled via build config 00:01:22.981 test-flow-perf: explicitly disabled via build config 00:01:22.981 test-gpudev: explicitly disabled 
via build config 00:01:22.981 test-mldev: explicitly disabled via build config 00:01:22.981 test-pipeline: explicitly disabled via build config 00:01:22.981 test-pmd: explicitly disabled via build config 00:01:22.981 test-regex: explicitly disabled via build config 00:01:22.981 test-sad: explicitly disabled via build config 00:01:22.981 test-security-perf: explicitly disabled via build config 00:01:22.981 00:01:22.981 libs: 00:01:22.981 argparse: explicitly disabled via build config 00:01:22.981 metrics: explicitly disabled via build config 00:01:22.981 acl: explicitly disabled via build config 00:01:22.981 bbdev: explicitly disabled via build config 00:01:22.981 bitratestats: explicitly disabled via build config 00:01:22.981 bpf: explicitly disabled via build config 00:01:22.981 cfgfile: explicitly disabled via build config 00:01:22.981 distributor: explicitly disabled via build config 00:01:22.981 efd: explicitly disabled via build config 00:01:22.981 eventdev: explicitly disabled via build config 00:01:22.981 dispatcher: explicitly disabled via build config 00:01:22.981 gpudev: explicitly disabled via build config 00:01:22.981 gro: explicitly disabled via build config 00:01:22.981 gso: explicitly disabled via build config 00:01:22.981 ip_frag: explicitly disabled via build config 00:01:22.981 jobstats: explicitly disabled via build config 00:01:22.981 latencystats: explicitly disabled via build config 00:01:22.981 lpm: explicitly disabled via build config 00:01:22.981 member: explicitly disabled via build config 00:01:22.981 pcapng: explicitly disabled via build config 00:01:22.981 rawdev: explicitly disabled via build config 00:01:22.981 regexdev: explicitly disabled via build config 00:01:22.981 mldev: explicitly disabled via build config 00:01:22.981 rib: explicitly disabled via build config 00:01:22.981 sched: explicitly disabled via build config 00:01:22.981 stack: explicitly disabled via build config 00:01:22.981 ipsec: explicitly disabled via build config 00:01:22.981 pdcp: explicitly disabled via build config 00:01:22.981 fib: explicitly disabled via build config 00:01:22.981 port: explicitly disabled via build config 00:01:22.981 pdump: explicitly disabled via build config 00:01:22.981 table: explicitly disabled via build config 00:01:22.981 pipeline: explicitly disabled via build config 00:01:22.981 graph: explicitly disabled via build config 00:01:22.981 node: explicitly disabled via build config 00:01:22.981 00:01:22.981 drivers: 00:01:22.981 common/cpt: not in enabled drivers build config 00:01:22.981 common/dpaax: not in enabled drivers build config 00:01:22.981 common/iavf: not in enabled drivers build config 00:01:22.981 common/idpf: not in enabled drivers build config 00:01:22.981 common/ionic: not in enabled drivers build config 00:01:22.981 common/mvep: not in enabled drivers build config 00:01:22.981 common/octeontx: not in enabled drivers build config 00:01:22.981 bus/auxiliary: not in enabled drivers build config 00:01:22.981 bus/cdx: not in enabled drivers build config 00:01:22.981 bus/dpaa: not in enabled drivers build config 00:01:22.981 bus/fslmc: not in enabled drivers build config 00:01:22.981 bus/ifpga: not in enabled drivers build config 00:01:22.981 bus/platform: not in enabled drivers build config 00:01:22.981 bus/uacce: not in enabled drivers build config 00:01:22.981 bus/vmbus: not in enabled drivers build config 00:01:22.981 common/cnxk: not in enabled drivers build config 00:01:22.981 common/mlx5: not in enabled drivers build config 00:01:22.981 
common/nfp: not in enabled drivers build config 00:01:22.981 common/nitrox: not in enabled drivers build config 00:01:22.982 common/qat: not in enabled drivers build config 00:01:22.982 common/sfc_efx: not in enabled drivers build config 00:01:22.982 mempool/bucket: not in enabled drivers build config 00:01:22.982 mempool/cnxk: not in enabled drivers build config 00:01:22.982 mempool/dpaa: not in enabled drivers build config 00:01:22.982 mempool/dpaa2: not in enabled drivers build config 00:01:22.982 mempool/octeontx: not in enabled drivers build config 00:01:22.982 mempool/stack: not in enabled drivers build config 00:01:22.982 dma/cnxk: not in enabled drivers build config 00:01:22.982 dma/dpaa: not in enabled drivers build config 00:01:22.982 dma/dpaa2: not in enabled drivers build config 00:01:22.982 dma/hisilicon: not in enabled drivers build config 00:01:22.982 dma/idxd: not in enabled drivers build config 00:01:22.982 dma/ioat: not in enabled drivers build config 00:01:22.982 dma/skeleton: not in enabled drivers build config 00:01:22.982 net/af_packet: not in enabled drivers build config 00:01:22.982 net/af_xdp: not in enabled drivers build config 00:01:22.982 net/ark: not in enabled drivers build config 00:01:22.982 net/atlantic: not in enabled drivers build config 00:01:22.982 net/avp: not in enabled drivers build config 00:01:22.982 net/axgbe: not in enabled drivers build config 00:01:22.982 net/bnx2x: not in enabled drivers build config 00:01:22.982 net/bnxt: not in enabled drivers build config 00:01:22.982 net/bonding: not in enabled drivers build config 00:01:22.982 net/cnxk: not in enabled drivers build config 00:01:22.982 net/cpfl: not in enabled drivers build config 00:01:22.982 net/cxgbe: not in enabled drivers build config 00:01:22.982 net/dpaa: not in enabled drivers build config 00:01:22.982 net/dpaa2: not in enabled drivers build config 00:01:22.982 net/e1000: not in enabled drivers build config 00:01:22.982 net/ena: not in enabled drivers build config 00:01:22.982 net/enetc: not in enabled drivers build config 00:01:22.982 net/enetfec: not in enabled drivers build config 00:01:22.982 net/enic: not in enabled drivers build config 00:01:22.982 net/failsafe: not in enabled drivers build config 00:01:22.982 net/fm10k: not in enabled drivers build config 00:01:22.982 net/gve: not in enabled drivers build config 00:01:22.982 net/hinic: not in enabled drivers build config 00:01:22.982 net/hns3: not in enabled drivers build config 00:01:22.982 net/i40e: not in enabled drivers build config 00:01:22.982 net/iavf: not in enabled drivers build config 00:01:22.982 net/ice: not in enabled drivers build config 00:01:22.982 net/idpf: not in enabled drivers build config 00:01:22.982 net/igc: not in enabled drivers build config 00:01:22.982 net/ionic: not in enabled drivers build config 00:01:22.982 net/ipn3ke: not in enabled drivers build config 00:01:22.982 net/ixgbe: not in enabled drivers build config 00:01:22.982 net/mana: not in enabled drivers build config 00:01:22.982 net/memif: not in enabled drivers build config 00:01:22.982 net/mlx4: not in enabled drivers build config 00:01:22.982 net/mlx5: not in enabled drivers build config 00:01:22.982 net/mvneta: not in enabled drivers build config 00:01:22.982 net/mvpp2: not in enabled drivers build config 00:01:22.982 net/netvsc: not in enabled drivers build config 00:01:22.982 net/nfb: not in enabled drivers build config 00:01:22.982 net/nfp: not in enabled drivers build config 00:01:22.982 net/ngbe: not in enabled drivers build 
config 00:01:22.982 net/null: not in enabled drivers build config 00:01:22.982 net/octeontx: not in enabled drivers build config 00:01:22.982 net/octeon_ep: not in enabled drivers build config 00:01:22.982 net/pcap: not in enabled drivers build config 00:01:22.982 net/pfe: not in enabled drivers build config 00:01:22.982 net/qede: not in enabled drivers build config 00:01:22.982 net/ring: not in enabled drivers build config 00:01:22.982 net/sfc: not in enabled drivers build config 00:01:22.982 net/softnic: not in enabled drivers build config 00:01:22.982 net/tap: not in enabled drivers build config 00:01:22.982 net/thunderx: not in enabled drivers build config 00:01:22.982 net/txgbe: not in enabled drivers build config 00:01:22.982 net/vdev_netvsc: not in enabled drivers build config 00:01:22.982 net/vhost: not in enabled drivers build config 00:01:22.982 net/virtio: not in enabled drivers build config 00:01:22.982 net/vmxnet3: not in enabled drivers build config 00:01:22.982 raw/*: missing internal dependency, "rawdev" 00:01:22.982 crypto/armv8: not in enabled drivers build config 00:01:22.982 crypto/bcmfs: not in enabled drivers build config 00:01:22.982 crypto/caam_jr: not in enabled drivers build config 00:01:22.982 crypto/ccp: not in enabled drivers build config 00:01:22.982 crypto/cnxk: not in enabled drivers build config 00:01:22.982 crypto/dpaa_sec: not in enabled drivers build config 00:01:22.982 crypto/dpaa2_sec: not in enabled drivers build config 00:01:22.982 crypto/ipsec_mb: not in enabled drivers build config 00:01:22.982 crypto/mlx5: not in enabled drivers build config 00:01:22.982 crypto/mvsam: not in enabled drivers build config 00:01:22.982 crypto/nitrox: not in enabled drivers build config 00:01:22.982 crypto/null: not in enabled drivers build config 00:01:22.982 crypto/octeontx: not in enabled drivers build config 00:01:22.982 crypto/openssl: not in enabled drivers build config 00:01:22.982 crypto/scheduler: not in enabled drivers build config 00:01:22.982 crypto/uadk: not in enabled drivers build config 00:01:22.982 crypto/virtio: not in enabled drivers build config 00:01:22.982 compress/isal: not in enabled drivers build config 00:01:22.982 compress/mlx5: not in enabled drivers build config 00:01:22.982 compress/nitrox: not in enabled drivers build config 00:01:22.982 compress/octeontx: not in enabled drivers build config 00:01:22.982 compress/zlib: not in enabled drivers build config 00:01:22.982 regex/*: missing internal dependency, "regexdev" 00:01:22.982 ml/*: missing internal dependency, "mldev" 00:01:22.982 vdpa/ifc: not in enabled drivers build config 00:01:22.982 vdpa/mlx5: not in enabled drivers build config 00:01:22.982 vdpa/nfp: not in enabled drivers build config 00:01:22.982 vdpa/sfc: not in enabled drivers build config 00:01:22.982 event/*: missing internal dependency, "eventdev" 00:01:22.982 baseband/*: missing internal dependency, "bbdev" 00:01:22.982 gpu/*: missing internal dependency, "gpudev" 00:01:22.982 00:01:22.982 00:01:22.982 Build targets in project: 84 00:01:22.982 00:01:22.982 DPDK 24.03.0 00:01:22.982 00:01:22.982 User defined options 00:01:22.982 buildtype : debug 00:01:22.982 default_library : shared 00:01:22.982 libdir : lib 00:01:22.982 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:22.982 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:22.982 c_link_args : 00:01:22.982 cpu_instruction_set: native 00:01:22.982 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:22.982 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:22.982 enable_docs : false 00:01:22.982 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:22.982 enable_kmods : false 00:01:22.982 max_lcores : 128 00:01:22.982 tests : false 00:01:22.982 00:01:22.982 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:22.982 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:22.982 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:22.982 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:22.982 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:22.982 [4/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:22.982 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:22.982 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:22.982 [7/267] Linking static target lib/librte_kvargs.a 00:01:22.982 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:22.982 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:22.982 [10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:22.982 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:22.982 [12/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:22.982 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:22.982 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:22.982 [15/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:22.982 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:22.982 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:22.982 [18/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:22.982 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:22.982 [20/267] Linking static target lib/librte_log.a 00:01:22.982 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:22.982 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:22.982 [23/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:22.982 [24/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:22.982 [25/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:22.982 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:22.982 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:22.982 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:22.982 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:22.982 [30/267] Linking static target lib/librte_pci.a 
00:01:22.982 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:22.982 [32/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:22.982 [33/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:22.982 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:22.982 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:22.982 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:22.982 [37/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:23.242 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:23.242 [39/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.242 [40/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:23.242 [41/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.242 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:23.242 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:23.242 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:23.242 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:23.242 [46/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:23.242 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:23.242 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:23.242 [49/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:23.242 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:23.242 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:23.242 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:23.242 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:23.242 [54/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:23.242 [55/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:23.242 [56/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:23.242 [57/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:23.242 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:23.242 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:23.242 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:23.242 [61/267] Linking static target lib/librte_telemetry.a 00:01:23.242 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:23.242 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:23.242 [64/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:23.242 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:23.242 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:23.242 [67/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:23.242 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:23.242 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 
00:01:23.242 [70/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:23.242 [71/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:23.242 [72/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:23.242 [73/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:23.242 [74/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:23.242 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:23.242 [76/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:23.242 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:23.242 [78/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:23.242 [79/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:23.242 [80/267] Linking static target lib/librte_meter.a 00:01:23.242 [81/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:23.242 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:23.242 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:23.242 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:23.242 [85/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:23.242 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:23.242 [87/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:23.242 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:23.242 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:23.242 [90/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:23.242 [91/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:23.242 [92/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:23.503 [93/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:23.503 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:23.503 [95/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:23.503 [96/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:23.503 [97/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:23.503 [98/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:23.503 [99/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:23.503 [100/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:23.503 [101/267] Linking static target lib/librte_timer.a 00:01:23.503 [102/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:23.503 [103/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:23.503 [104/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:23.503 [105/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:23.503 [106/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:23.503 [107/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:23.503 [108/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:23.503 [109/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:23.503 [110/267] Compiling C object 
lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:23.503 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:23.503 [112/267] Linking static target lib/librte_cmdline.a 00:01:23.503 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:23.503 [114/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:23.503 [115/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:23.503 [116/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:23.503 [117/267] Linking static target lib/librte_ring.a 00:01:23.503 [118/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:23.503 [119/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:23.503 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:23.503 [121/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:23.503 [122/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:23.503 [123/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:23.503 [124/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:23.503 [125/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:23.503 [126/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:23.503 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:23.503 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:23.503 [129/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:23.503 [130/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:23.503 [131/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:23.503 [132/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:23.503 [133/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.503 [134/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:23.503 [135/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:23.503 [136/267] Linking static target lib/librte_net.a 00:01:23.503 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:23.503 [138/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:23.503 [139/267] Linking static target lib/librte_dmadev.a 00:01:23.503 [140/267] Linking static target lib/librte_compressdev.a 00:01:23.503 [141/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:23.503 [142/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:23.503 [143/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:23.503 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:23.503 [145/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:23.503 [146/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:23.503 [147/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:23.503 [148/267] Linking static target lib/librte_power.a 00:01:23.503 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:23.503 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:23.503 [151/267] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:23.504 [152/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:23.504 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:23.504 [154/267] Linking static target lib/librte_mempool.a 00:01:23.504 [155/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:23.504 [156/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:23.504 [157/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:23.504 [158/267] Linking target lib/librte_log.so.24.1 00:01:23.504 [159/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:23.504 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:23.504 [161/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:23.504 [162/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:23.504 [163/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:23.504 [164/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:23.504 [165/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:23.504 [166/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:23.504 [167/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:23.504 [168/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:23.504 [169/267] Linking static target lib/librte_rcu.a 00:01:23.504 [170/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:23.504 [171/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:23.504 [172/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:23.504 [173/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:23.504 [174/267] Linking static target lib/librte_eal.a 00:01:23.504 [175/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.504 [176/267] Linking static target lib/librte_security.a 00:01:23.504 [177/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:23.504 [178/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:23.763 [179/267] Linking static target lib/librte_reorder.a 00:01:23.763 [180/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:23.763 [181/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:23.763 [182/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:23.763 [183/267] Linking static target lib/librte_mbuf.a 00:01:23.763 [184/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:23.763 [185/267] Linking target lib/librte_kvargs.so.24.1 00:01:23.763 [186/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:23.763 [187/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:23.763 [188/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:23.763 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:23.763 [190/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:23.763 [191/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:23.763 [192/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 
00:01:23.763 [193/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:23.763 [194/267] Linking static target drivers/librte_bus_pci.a 00:01:23.763 [195/267] Linking static target drivers/librte_bus_vdev.a 00:01:23.763 [196/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.763 [197/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:23.763 [198/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:23.763 [199/267] Linking static target lib/librte_hash.a 00:01:23.763 [200/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.763 [201/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.763 [202/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.763 [203/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:23.763 [204/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:24.022 [205/267] Linking static target lib/librte_cryptodev.a 00:01:24.022 [206/267] Linking target lib/librte_telemetry.so.24.1 00:01:24.022 [207/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:24.022 [208/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:24.022 [209/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:24.022 [210/267] Linking static target drivers/librte_mempool_ring.a 00:01:24.022 [211/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.022 [212/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:24.022 [213/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.281 [214/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.281 [215/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.281 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.281 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.281 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:24.281 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:24.281 [220/267] Linking static target lib/librte_ethdev.a 00:01:24.540 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.540 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.540 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.800 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.800 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.800 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.740 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:25.740 [228/267] Linking static target lib/librte_vhost.a 00:01:25.999 
[229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.907 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.610 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.549 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.549 [233/267] Linking target lib/librte_eal.so.24.1 00:01:35.549 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:35.549 [235/267] Linking target lib/librte_ring.so.24.1 00:01:35.549 [236/267] Linking target lib/librte_meter.so.24.1 00:01:35.549 [237/267] Linking target lib/librte_timer.so.24.1 00:01:35.549 [238/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:35.549 [239/267] Linking target lib/librte_pci.so.24.1 00:01:35.549 [240/267] Linking target lib/librte_dmadev.so.24.1 00:01:35.808 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:35.808 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:35.808 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:35.808 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:35.808 [245/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:35.808 [246/267] Linking target lib/librte_mempool.so.24.1 00:01:35.808 [247/267] Linking target lib/librte_rcu.so.24.1 00:01:35.808 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:35.808 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:35.808 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:35.808 [251/267] Linking target lib/librte_mbuf.so.24.1 00:01:35.808 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:36.068 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:36.068 [254/267] Linking target lib/librte_reorder.so.24.1 00:01:36.068 [255/267] Linking target lib/librte_compressdev.so.24.1 00:01:36.068 [256/267] Linking target lib/librte_net.so.24.1 00:01:36.068 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:01:36.328 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:36.328 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:36.328 [260/267] Linking target lib/librte_cmdline.so.24.1 00:01:36.328 [261/267] Linking target lib/librte_hash.so.24.1 00:01:36.328 [262/267] Linking target lib/librte_security.so.24.1 00:01:36.328 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:36.328 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:36.328 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:36.587 [266/267] Linking target lib/librte_power.so.24.1 00:01:36.587 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:36.587 INFO: autodetecting backend as ninja 00:01:36.587 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:39.881 CC lib/ut_mock/mock.o 00:01:39.881 CC lib/log/log.o 00:01:39.881 CC lib/log/log_flags.o 00:01:39.881 CC lib/ut/ut.o 00:01:39.881 CC lib/log/log_deprecated.o 
00:01:39.881 LIB libspdk_ut_mock.a 00:01:39.881 LIB libspdk_log.a 00:01:39.881 LIB libspdk_ut.a 00:01:39.881 SO libspdk_ut_mock.so.6.0 00:01:39.881 SO libspdk_log.so.7.1 00:01:39.881 SO libspdk_ut.so.2.0 00:01:39.881 SYMLINK libspdk_ut_mock.so 00:01:39.881 SYMLINK libspdk_ut.so 00:01:39.881 SYMLINK libspdk_log.so 00:01:40.141 CC lib/util/base64.o 00:01:40.141 CC lib/util/bit_array.o 00:01:40.141 CC lib/util/cpuset.o 00:01:40.141 CC lib/dma/dma.o 00:01:40.141 CC lib/util/crc16.o 00:01:40.141 CC lib/ioat/ioat.o 00:01:40.141 CC lib/util/crc32.o 00:01:40.141 CC lib/util/crc32c.o 00:01:40.141 CXX lib/trace_parser/trace.o 00:01:40.141 CC lib/util/crc32_ieee.o 00:01:40.141 CC lib/util/crc64.o 00:01:40.141 CC lib/util/dif.o 00:01:40.141 CC lib/util/fd.o 00:01:40.141 CC lib/util/fd_group.o 00:01:40.141 CC lib/util/file.o 00:01:40.141 CC lib/util/hexlify.o 00:01:40.141 CC lib/util/iov.o 00:01:40.141 CC lib/util/math.o 00:01:40.427 CC lib/util/net.o 00:01:40.427 CC lib/util/pipe.o 00:01:40.427 CC lib/util/strerror_tls.o 00:01:40.427 CC lib/util/string.o 00:01:40.427 CC lib/util/uuid.o 00:01:40.427 CC lib/util/xor.o 00:01:40.427 CC lib/util/zipf.o 00:01:40.427 CC lib/util/md5.o 00:01:40.427 CC lib/vfio_user/host/vfio_user_pci.o 00:01:40.427 CC lib/vfio_user/host/vfio_user.o 00:01:40.427 LIB libspdk_dma.a 00:01:40.427 SO libspdk_dma.so.5.0 00:01:40.688 LIB libspdk_ioat.a 00:01:40.688 SO libspdk_ioat.so.7.0 00:01:40.688 SYMLINK libspdk_dma.so 00:01:40.688 SYMLINK libspdk_ioat.so 00:01:40.688 LIB libspdk_vfio_user.a 00:01:40.688 SO libspdk_vfio_user.so.5.0 00:01:40.948 LIB libspdk_util.a 00:01:40.948 SYMLINK libspdk_vfio_user.so 00:01:40.948 SO libspdk_util.so.10.1 00:01:40.948 SYMLINK libspdk_util.so 00:01:41.208 LIB libspdk_trace_parser.a 00:01:41.208 SO libspdk_trace_parser.so.6.0 00:01:41.208 SYMLINK libspdk_trace_parser.so 00:01:41.469 CC lib/conf/conf.o 00:01:41.469 CC lib/json/json_parse.o 00:01:41.469 CC lib/json/json_util.o 00:01:41.469 CC lib/json/json_write.o 00:01:41.469 CC lib/rdma_utils/rdma_utils.o 00:01:41.469 CC lib/idxd/idxd.o 00:01:41.469 CC lib/idxd/idxd_user.o 00:01:41.469 CC lib/vmd/vmd.o 00:01:41.469 CC lib/idxd/idxd_kernel.o 00:01:41.469 CC lib/vmd/led.o 00:01:41.469 CC lib/env_dpdk/env.o 00:01:41.469 CC lib/env_dpdk/memory.o 00:01:41.469 CC lib/env_dpdk/pci.o 00:01:41.469 CC lib/env_dpdk/init.o 00:01:41.469 CC lib/env_dpdk/threads.o 00:01:41.469 CC lib/env_dpdk/pci_ioat.o 00:01:41.469 CC lib/env_dpdk/pci_virtio.o 00:01:41.469 CC lib/env_dpdk/pci_vmd.o 00:01:41.469 CC lib/env_dpdk/pci_idxd.o 00:01:41.469 CC lib/env_dpdk/pci_event.o 00:01:41.469 CC lib/env_dpdk/sigbus_handler.o 00:01:41.469 CC lib/env_dpdk/pci_dpdk.o 00:01:41.469 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:41.469 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:41.729 LIB libspdk_conf.a 00:01:41.729 SO libspdk_conf.so.6.0 00:01:41.729 LIB libspdk_rdma_utils.a 00:01:41.729 LIB libspdk_json.a 00:01:41.729 SYMLINK libspdk_conf.so 00:01:41.729 SO libspdk_rdma_utils.so.1.0 00:01:41.729 SO libspdk_json.so.6.0 00:01:41.729 SYMLINK libspdk_rdma_utils.so 00:01:41.729 SYMLINK libspdk_json.so 00:01:41.989 LIB libspdk_idxd.a 00:01:41.989 SO libspdk_idxd.so.12.1 00:01:41.989 LIB libspdk_vmd.a 00:01:41.989 SO libspdk_vmd.so.6.0 00:01:41.989 SYMLINK libspdk_idxd.so 00:01:42.249 SYMLINK libspdk_vmd.so 00:01:42.249 CC lib/rdma_provider/common.o 00:01:42.249 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:42.249 CC lib/jsonrpc/jsonrpc_server.o 00:01:42.249 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:42.249 CC lib/jsonrpc/jsonrpc_client.o 00:01:42.249 
CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:42.510 LIB libspdk_rdma_provider.a 00:01:42.510 LIB libspdk_jsonrpc.a 00:01:42.510 SO libspdk_rdma_provider.so.7.0 00:01:42.510 SO libspdk_jsonrpc.so.6.0 00:01:42.510 SYMLINK libspdk_rdma_provider.so 00:01:42.510 SYMLINK libspdk_jsonrpc.so 00:01:42.771 LIB libspdk_env_dpdk.a 00:01:42.772 SO libspdk_env_dpdk.so.15.1 00:01:42.772 SYMLINK libspdk_env_dpdk.so 00:01:43.032 CC lib/rpc/rpc.o 00:01:43.293 LIB libspdk_rpc.a 00:01:43.293 SO libspdk_rpc.so.6.0 00:01:43.293 SYMLINK libspdk_rpc.so 00:01:43.554 CC lib/trace/trace.o 00:01:43.554 CC lib/trace/trace_flags.o 00:01:43.554 CC lib/trace/trace_rpc.o 00:01:43.554 CC lib/notify/notify.o 00:01:43.554 CC lib/notify/notify_rpc.o 00:01:43.554 CC lib/keyring/keyring.o 00:01:43.554 CC lib/keyring/keyring_rpc.o 00:01:43.815 LIB libspdk_notify.a 00:01:43.815 SO libspdk_notify.so.6.0 00:01:43.815 LIB libspdk_keyring.a 00:01:43.815 LIB libspdk_trace.a 00:01:43.815 SYMLINK libspdk_notify.so 00:01:43.815 SO libspdk_keyring.so.2.0 00:01:43.815 SO libspdk_trace.so.11.0 00:01:44.075 SYMLINK libspdk_keyring.so 00:01:44.075 SYMLINK libspdk_trace.so 00:01:44.335 CC lib/thread/thread.o 00:01:44.335 CC lib/thread/iobuf.o 00:01:44.335 CC lib/sock/sock.o 00:01:44.335 CC lib/sock/sock_rpc.o 00:01:44.907 LIB libspdk_sock.a 00:01:44.907 SO libspdk_sock.so.10.0 00:01:44.907 SYMLINK libspdk_sock.so 00:01:45.166 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:45.166 CC lib/nvme/nvme_ctrlr.o 00:01:45.166 CC lib/nvme/nvme_fabric.o 00:01:45.166 CC lib/nvme/nvme_ns_cmd.o 00:01:45.166 CC lib/nvme/nvme_ns.o 00:01:45.166 CC lib/nvme/nvme_pcie_common.o 00:01:45.166 CC lib/nvme/nvme_pcie.o 00:01:45.166 CC lib/nvme/nvme_qpair.o 00:01:45.166 CC lib/nvme/nvme.o 00:01:45.166 CC lib/nvme/nvme_quirks.o 00:01:45.166 CC lib/nvme/nvme_transport.o 00:01:45.166 CC lib/nvme/nvme_discovery.o 00:01:45.166 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:45.166 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:45.166 CC lib/nvme/nvme_tcp.o 00:01:45.166 CC lib/nvme/nvme_opal.o 00:01:45.166 CC lib/nvme/nvme_io_msg.o 00:01:45.166 CC lib/nvme/nvme_poll_group.o 00:01:45.166 CC lib/nvme/nvme_zns.o 00:01:45.166 CC lib/nvme/nvme_stubs.o 00:01:45.166 CC lib/nvme/nvme_auth.o 00:01:45.166 CC lib/nvme/nvme_cuse.o 00:01:45.166 CC lib/nvme/nvme_vfio_user.o 00:01:45.166 CC lib/nvme/nvme_rdma.o 00:01:45.734 LIB libspdk_thread.a 00:01:45.734 SO libspdk_thread.so.11.0 00:01:45.734 SYMLINK libspdk_thread.so 00:01:46.307 CC lib/accel/accel.o 00:01:46.307 CC lib/accel/accel_rpc.o 00:01:46.307 CC lib/accel/accel_sw.o 00:01:46.307 CC lib/blob/blobstore.o 00:01:46.307 CC lib/blob/request.o 00:01:46.307 CC lib/blob/zeroes.o 00:01:46.307 CC lib/fsdev/fsdev.o 00:01:46.307 CC lib/blob/blob_bs_dev.o 00:01:46.307 CC lib/fsdev/fsdev_io.o 00:01:46.307 CC lib/fsdev/fsdev_rpc.o 00:01:46.307 CC lib/virtio/virtio.o 00:01:46.307 CC lib/virtio/virtio_vhost_user.o 00:01:46.307 CC lib/vfu_tgt/tgt_endpoint.o 00:01:46.307 CC lib/init/json_config.o 00:01:46.307 CC lib/virtio/virtio_vfio_user.o 00:01:46.307 CC lib/init/subsystem.o 00:01:46.307 CC lib/vfu_tgt/tgt_rpc.o 00:01:46.307 CC lib/virtio/virtio_pci.o 00:01:46.307 CC lib/init/subsystem_rpc.o 00:01:46.307 CC lib/init/rpc.o 00:01:46.568 LIB libspdk_init.a 00:01:46.568 SO libspdk_init.so.6.0 00:01:46.568 LIB libspdk_virtio.a 00:01:46.568 LIB libspdk_vfu_tgt.a 00:01:46.568 SO libspdk_vfu_tgt.so.3.0 00:01:46.568 SO libspdk_virtio.so.7.0 00:01:46.568 SYMLINK libspdk_init.so 00:01:46.829 SYMLINK libspdk_vfu_tgt.so 00:01:46.829 SYMLINK libspdk_virtio.so 00:01:46.829 LIB 
libspdk_fsdev.a 00:01:46.829 SO libspdk_fsdev.so.2.0 00:01:47.090 SYMLINK libspdk_fsdev.so 00:01:47.090 CC lib/event/app.o 00:01:47.090 CC lib/event/reactor.o 00:01:47.090 CC lib/event/log_rpc.o 00:01:47.090 CC lib/event/app_rpc.o 00:01:47.090 CC lib/event/scheduler_static.o 00:01:47.090 LIB libspdk_accel.a 00:01:47.351 SO libspdk_accel.so.16.0 00:01:47.351 LIB libspdk_nvme.a 00:01:47.351 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:01:47.351 SYMLINK libspdk_accel.so 00:01:47.351 SO libspdk_nvme.so.15.0 00:01:47.351 LIB libspdk_event.a 00:01:47.351 SO libspdk_event.so.14.0 00:01:47.612 SYMLINK libspdk_event.so 00:01:47.612 SYMLINK libspdk_nvme.so 00:01:47.612 CC lib/bdev/bdev.o 00:01:47.612 CC lib/bdev/bdev_rpc.o 00:01:47.612 CC lib/bdev/bdev_zone.o 00:01:47.612 CC lib/bdev/part.o 00:01:47.612 CC lib/bdev/scsi_nvme.o 00:01:47.873 LIB libspdk_fuse_dispatcher.a 00:01:47.873 SO libspdk_fuse_dispatcher.so.1.0 00:01:48.134 SYMLINK libspdk_fuse_dispatcher.so 00:01:49.077 LIB libspdk_blob.a 00:01:49.077 SO libspdk_blob.so.12.0 00:01:49.077 SYMLINK libspdk_blob.so 00:01:49.337 CC lib/blobfs/blobfs.o 00:01:49.337 CC lib/blobfs/tree.o 00:01:49.337 CC lib/lvol/lvol.o 00:01:50.279 LIB libspdk_blobfs.a 00:01:50.279 LIB libspdk_bdev.a 00:01:50.279 SO libspdk_blobfs.so.11.0 00:01:50.279 SO libspdk_bdev.so.17.0 00:01:50.279 LIB libspdk_lvol.a 00:01:50.279 SYMLINK libspdk_blobfs.so 00:01:50.279 SO libspdk_lvol.so.11.0 00:01:50.279 SYMLINK libspdk_bdev.so 00:01:50.279 SYMLINK libspdk_lvol.so 00:01:50.540 CC lib/scsi/dev.o 00:01:50.540 CC lib/scsi/lun.o 00:01:50.540 CC lib/scsi/port.o 00:01:50.540 CC lib/scsi/scsi.o 00:01:50.540 CC lib/scsi/scsi_bdev.o 00:01:50.540 CC lib/scsi/scsi_pr.o 00:01:50.540 CC lib/scsi/scsi_rpc.o 00:01:50.540 CC lib/scsi/task.o 00:01:50.540 CC lib/nvmf/ctrlr.o 00:01:50.540 CC lib/nvmf/ctrlr_discovery.o 00:01:50.540 CC lib/nvmf/ctrlr_bdev.o 00:01:50.540 CC lib/nvmf/subsystem.o 00:01:50.540 CC lib/ftl/ftl_core.o 00:01:50.540 CC lib/nvmf/nvmf.o 00:01:50.540 CC lib/ftl/ftl_init.o 00:01:50.540 CC lib/nbd/nbd.o 00:01:50.540 CC lib/ftl/ftl_layout.o 00:01:50.540 CC lib/nvmf/nvmf_rpc.o 00:01:50.540 CC lib/ublk/ublk.o 00:01:50.540 CC lib/nbd/nbd_rpc.o 00:01:50.540 CC lib/nvmf/transport.o 00:01:50.540 CC lib/ftl/ftl_debug.o 00:01:50.540 CC lib/ublk/ublk_rpc.o 00:01:50.540 CC lib/nvmf/tcp.o 00:01:50.540 CC lib/ftl/ftl_io.o 00:01:50.540 CC lib/ftl/ftl_sb.o 00:01:50.540 CC lib/nvmf/stubs.o 00:01:50.540 CC lib/ftl/ftl_l2p.o 00:01:50.540 CC lib/nvmf/mdns_server.o 00:01:50.540 CC lib/ftl/ftl_l2p_flat.o 00:01:50.540 CC lib/nvmf/vfio_user.o 00:01:50.540 CC lib/ftl/ftl_nv_cache.o 00:01:50.540 CC lib/ftl/ftl_band.o 00:01:50.540 CC lib/ftl/ftl_band_ops.o 00:01:50.540 CC lib/nvmf/rdma.o 00:01:50.540 CC lib/nvmf/auth.o 00:01:50.540 CC lib/ftl/ftl_writer.o 00:01:50.540 CC lib/ftl/ftl_rq.o 00:01:50.540 CC lib/ftl/ftl_reloc.o 00:01:50.540 CC lib/ftl/ftl_l2p_cache.o 00:01:50.540 CC lib/ftl/ftl_p2l.o 00:01:50.540 CC lib/ftl/ftl_p2l_log.o 00:01:50.540 CC lib/ftl/mngt/ftl_mngt.o 00:01:50.540 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:50.540 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:50.540 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:50.540 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:50.540 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:50.540 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:50.540 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:50.540 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:50.801 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:50.801 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:50.801 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:50.801 CC 
lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:50.801 CC lib/ftl/utils/ftl_md.o 00:01:50.801 CC lib/ftl/utils/ftl_conf.o 00:01:50.801 CC lib/ftl/utils/ftl_bitmap.o 00:01:50.801 CC lib/ftl/utils/ftl_mempool.o 00:01:50.801 CC lib/ftl/utils/ftl_property.o 00:01:50.801 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:50.801 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:50.801 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:50.801 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:50.801 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:50.801 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:50.801 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:50.801 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:50.801 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:50.801 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:50.801 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:50.801 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:01:50.801 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:01:50.801 CC lib/ftl/base/ftl_base_dev.o 00:01:50.801 CC lib/ftl/base/ftl_base_bdev.o 00:01:50.801 CC lib/ftl/ftl_trace.o 00:01:51.372 LIB libspdk_nbd.a 00:01:51.372 SO libspdk_nbd.so.7.0 00:01:51.372 SYMLINK libspdk_nbd.so 00:01:51.372 LIB libspdk_scsi.a 00:01:51.372 SO libspdk_scsi.so.9.0 00:01:51.632 LIB libspdk_ublk.a 00:01:51.632 SYMLINK libspdk_scsi.so 00:01:51.632 SO libspdk_ublk.so.3.0 00:01:51.632 SYMLINK libspdk_ublk.so 00:01:51.892 LIB libspdk_ftl.a 00:01:51.892 CC lib/vhost/vhost.o 00:01:51.892 CC lib/iscsi/conn.o 00:01:51.893 CC lib/iscsi/iscsi.o 00:01:51.893 CC lib/vhost/vhost_rpc.o 00:01:51.893 CC lib/iscsi/init_grp.o 00:01:51.893 CC lib/iscsi/portal_grp.o 00:01:51.893 CC lib/vhost/vhost_scsi.o 00:01:51.893 CC lib/vhost/vhost_blk.o 00:01:51.893 CC lib/iscsi/param.o 00:01:51.893 CC lib/vhost/rte_vhost_user.o 00:01:51.893 CC lib/iscsi/iscsi_rpc.o 00:01:51.893 CC lib/iscsi/tgt_node.o 00:01:51.893 CC lib/iscsi/iscsi_subsystem.o 00:01:51.893 CC lib/iscsi/task.o 00:01:51.893 SO libspdk_ftl.so.9.0 00:01:52.464 SYMLINK libspdk_ftl.so 00:01:52.724 LIB libspdk_nvmf.a 00:01:52.984 SO libspdk_nvmf.so.20.0 00:01:52.984 LIB libspdk_vhost.a 00:01:52.984 SO libspdk_vhost.so.8.0 00:01:52.984 SYMLINK libspdk_vhost.so 00:01:52.984 SYMLINK libspdk_nvmf.so 00:01:53.244 LIB libspdk_iscsi.a 00:01:53.244 SO libspdk_iscsi.so.8.0 00:01:53.244 SYMLINK libspdk_iscsi.so 00:01:53.814 CC module/env_dpdk/env_dpdk_rpc.o 00:01:53.814 CC module/vfu_device/vfu_virtio.o 00:01:53.814 CC module/vfu_device/vfu_virtio_blk.o 00:01:53.814 CC module/vfu_device/vfu_virtio_scsi.o 00:01:53.814 CC module/vfu_device/vfu_virtio_rpc.o 00:01:53.814 CC module/vfu_device/vfu_virtio_fs.o 00:01:54.073 CC module/accel/error/accel_error.o 00:01:54.073 CC module/accel/error/accel_error_rpc.o 00:01:54.073 LIB libspdk_env_dpdk_rpc.a 00:01:54.073 CC module/accel/iaa/accel_iaa.o 00:01:54.073 CC module/accel/dsa/accel_dsa.o 00:01:54.073 CC module/accel/iaa/accel_iaa_rpc.o 00:01:54.073 CC module/accel/dsa/accel_dsa_rpc.o 00:01:54.073 CC module/accel/ioat/accel_ioat.o 00:01:54.073 CC module/accel/ioat/accel_ioat_rpc.o 00:01:54.073 CC module/keyring/linux/keyring.o 00:01:54.073 CC module/blob/bdev/blob_bdev.o 00:01:54.073 CC module/scheduler/gscheduler/gscheduler.o 00:01:54.073 CC module/keyring/linux/keyring_rpc.o 00:01:54.073 CC module/keyring/file/keyring.o 00:01:54.073 CC module/keyring/file/keyring_rpc.o 00:01:54.073 CC module/sock/posix/posix.o 00:01:54.073 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:54.073 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:54.073 CC module/fsdev/aio/fsdev_aio.o 00:01:54.073 CC module/fsdev/aio/fsdev_aio_rpc.o 
00:01:54.073 CC module/fsdev/aio/linux_aio_mgr.o 00:01:54.073 SO libspdk_env_dpdk_rpc.so.6.0 00:01:54.333 SYMLINK libspdk_env_dpdk_rpc.so 00:01:54.333 LIB libspdk_keyring_linux.a 00:01:54.333 LIB libspdk_scheduler_gscheduler.a 00:01:54.333 LIB libspdk_keyring_file.a 00:01:54.333 LIB libspdk_accel_error.a 00:01:54.333 LIB libspdk_accel_ioat.a 00:01:54.333 LIB libspdk_scheduler_dpdk_governor.a 00:01:54.333 SO libspdk_keyring_linux.so.1.0 00:01:54.333 LIB libspdk_accel_iaa.a 00:01:54.333 SO libspdk_scheduler_gscheduler.so.4.0 00:01:54.333 SO libspdk_keyring_file.so.2.0 00:01:54.333 LIB libspdk_scheduler_dynamic.a 00:01:54.333 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:54.333 SO libspdk_accel_error.so.2.0 00:01:54.333 SO libspdk_accel_ioat.so.6.0 00:01:54.333 SO libspdk_accel_iaa.so.3.0 00:01:54.333 SYMLINK libspdk_keyring_linux.so 00:01:54.333 SO libspdk_scheduler_dynamic.so.4.0 00:01:54.333 LIB libspdk_accel_dsa.a 00:01:54.333 SYMLINK libspdk_scheduler_gscheduler.so 00:01:54.334 LIB libspdk_blob_bdev.a 00:01:54.334 SYMLINK libspdk_keyring_file.so 00:01:54.334 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:54.334 SYMLINK libspdk_accel_error.so 00:01:54.595 SYMLINK libspdk_accel_ioat.so 00:01:54.595 SYMLINK libspdk_accel_iaa.so 00:01:54.595 SO libspdk_blob_bdev.so.12.0 00:01:54.595 SO libspdk_accel_dsa.so.5.0 00:01:54.595 SYMLINK libspdk_scheduler_dynamic.so 00:01:54.595 LIB libspdk_vfu_device.a 00:01:54.595 SYMLINK libspdk_blob_bdev.so 00:01:54.595 SYMLINK libspdk_accel_dsa.so 00:01:54.595 SO libspdk_vfu_device.so.3.0 00:01:54.595 SYMLINK libspdk_vfu_device.so 00:01:54.856 LIB libspdk_fsdev_aio.a 00:01:54.856 SO libspdk_fsdev_aio.so.1.0 00:01:54.856 LIB libspdk_sock_posix.a 00:01:54.856 SYMLINK libspdk_fsdev_aio.so 00:01:54.856 SO libspdk_sock_posix.so.6.0 00:01:55.116 SYMLINK libspdk_sock_posix.so 00:01:55.116 CC module/bdev/delay/vbdev_delay.o 00:01:55.116 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:55.116 CC module/bdev/error/vbdev_error.o 00:01:55.116 CC module/bdev/error/vbdev_error_rpc.o 00:01:55.116 CC module/bdev/aio/bdev_aio.o 00:01:55.116 CC module/bdev/gpt/gpt.o 00:01:55.116 CC module/bdev/aio/bdev_aio_rpc.o 00:01:55.116 CC module/bdev/null/bdev_null.o 00:01:55.116 CC module/bdev/gpt/vbdev_gpt.o 00:01:55.116 CC module/bdev/lvol/vbdev_lvol.o 00:01:55.116 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:55.116 CC module/bdev/null/bdev_null_rpc.o 00:01:55.116 CC module/bdev/nvme/bdev_nvme.o 00:01:55.116 CC module/bdev/raid/bdev_raid.o 00:01:55.116 CC module/bdev/nvme/nvme_rpc.o 00:01:55.116 CC module/bdev/raid/bdev_raid_rpc.o 00:01:55.116 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:55.116 CC module/bdev/malloc/bdev_malloc.o 00:01:55.116 CC module/bdev/nvme/bdev_mdns_client.o 00:01:55.116 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:55.116 CC module/blobfs/bdev/blobfs_bdev.o 00:01:55.116 CC module/bdev/raid/bdev_raid_sb.o 00:01:55.116 CC module/bdev/split/vbdev_split.o 00:01:55.116 CC module/bdev/nvme/vbdev_opal.o 00:01:55.116 CC module/bdev/raid/raid0.o 00:01:55.116 CC module/bdev/iscsi/bdev_iscsi.o 00:01:55.116 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:55.116 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:55.116 CC module/bdev/split/vbdev_split_rpc.o 00:01:55.116 CC module/bdev/raid/raid1.o 00:01:55.116 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:55.116 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:55.116 CC module/bdev/passthru/vbdev_passthru.o 00:01:55.116 CC module/bdev/raid/concat.o 00:01:55.116 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:55.116 CC 
module/bdev/virtio/bdev_virtio_scsi.o 00:01:55.116 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:55.116 CC module/bdev/ftl/bdev_ftl.o 00:01:55.116 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:55.116 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:55.116 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:55.116 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:55.377 LIB libspdk_blobfs_bdev.a 00:01:55.377 SO libspdk_blobfs_bdev.so.6.0 00:01:55.377 LIB libspdk_bdev_null.a 00:01:55.377 LIB libspdk_bdev_error.a 00:01:55.377 LIB libspdk_bdev_gpt.a 00:01:55.377 SO libspdk_bdev_null.so.6.0 00:01:55.377 LIB libspdk_bdev_split.a 00:01:55.638 SO libspdk_bdev_error.so.6.0 00:01:55.638 SO libspdk_bdev_gpt.so.6.0 00:01:55.638 SYMLINK libspdk_blobfs_bdev.so 00:01:55.638 SO libspdk_bdev_split.so.6.0 00:01:55.638 LIB libspdk_bdev_ftl.a 00:01:55.638 LIB libspdk_bdev_delay.a 00:01:55.638 LIB libspdk_bdev_passthru.a 00:01:55.638 SYMLINK libspdk_bdev_null.so 00:01:55.638 LIB libspdk_bdev_aio.a 00:01:55.638 SO libspdk_bdev_ftl.so.6.0 00:01:55.638 LIB libspdk_bdev_zone_block.a 00:01:55.638 SYMLINK libspdk_bdev_error.so 00:01:55.638 SYMLINK libspdk_bdev_gpt.so 00:01:55.638 SYMLINK libspdk_bdev_split.so 00:01:55.638 SO libspdk_bdev_passthru.so.6.0 00:01:55.638 SO libspdk_bdev_delay.so.6.0 00:01:55.638 LIB libspdk_bdev_iscsi.a 00:01:55.638 LIB libspdk_bdev_malloc.a 00:01:55.638 SO libspdk_bdev_aio.so.6.0 00:01:55.638 SO libspdk_bdev_zone_block.so.6.0 00:01:55.638 SO libspdk_bdev_iscsi.so.6.0 00:01:55.638 SO libspdk_bdev_malloc.so.6.0 00:01:55.638 SYMLINK libspdk_bdev_ftl.so 00:01:55.638 SYMLINK libspdk_bdev_delay.so 00:01:55.638 SYMLINK libspdk_bdev_passthru.so 00:01:55.638 SYMLINK libspdk_bdev_aio.so 00:01:55.638 SYMLINK libspdk_bdev_zone_block.so 00:01:55.638 LIB libspdk_bdev_lvol.a 00:01:55.638 SYMLINK libspdk_bdev_iscsi.so 00:01:55.638 SYMLINK libspdk_bdev_malloc.so 00:01:55.638 LIB libspdk_bdev_virtio.a 00:01:55.638 SO libspdk_bdev_lvol.so.6.0 00:01:55.899 SO libspdk_bdev_virtio.so.6.0 00:01:55.899 SYMLINK libspdk_bdev_lvol.so 00:01:55.899 SYMLINK libspdk_bdev_virtio.so 00:01:56.160 LIB libspdk_bdev_raid.a 00:01:56.160 SO libspdk_bdev_raid.so.6.0 00:01:56.160 SYMLINK libspdk_bdev_raid.so 00:01:57.543 LIB libspdk_bdev_nvme.a 00:01:57.543 SO libspdk_bdev_nvme.so.7.1 00:01:57.804 SYMLINK libspdk_bdev_nvme.so 00:01:58.391 CC module/event/subsystems/vmd/vmd.o 00:01:58.391 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:58.391 CC module/event/subsystems/iobuf/iobuf.o 00:01:58.391 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:58.391 CC module/event/subsystems/sock/sock.o 00:01:58.391 CC module/event/subsystems/keyring/keyring.o 00:01:58.391 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:58.391 CC module/event/subsystems/scheduler/scheduler.o 00:01:58.391 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:58.391 CC module/event/subsystems/fsdev/fsdev.o 00:01:58.651 LIB libspdk_event_scheduler.a 00:01:58.651 LIB libspdk_event_keyring.a 00:01:58.651 LIB libspdk_event_vmd.a 00:01:58.651 LIB libspdk_event_vhost_blk.a 00:01:58.651 LIB libspdk_event_sock.a 00:01:58.651 LIB libspdk_event_iobuf.a 00:01:58.651 LIB libspdk_event_fsdev.a 00:01:58.651 LIB libspdk_event_vfu_tgt.a 00:01:58.651 SO libspdk_event_scheduler.so.4.0 00:01:58.651 SO libspdk_event_keyring.so.1.0 00:01:58.651 SO libspdk_event_vmd.so.6.0 00:01:58.651 SO libspdk_event_vhost_blk.so.3.0 00:01:58.651 SO libspdk_event_vfu_tgt.so.3.0 00:01:58.651 SO libspdk_event_sock.so.5.0 00:01:58.651 SO libspdk_event_fsdev.so.1.0 00:01:58.651 SO 
libspdk_event_iobuf.so.3.0 00:01:58.651 SYMLINK libspdk_event_scheduler.so 00:01:58.651 SYMLINK libspdk_event_keyring.so 00:01:58.651 SYMLINK libspdk_event_fsdev.so 00:01:58.651 SYMLINK libspdk_event_vhost_blk.so 00:01:58.651 SYMLINK libspdk_event_vmd.so 00:01:58.651 SYMLINK libspdk_event_vfu_tgt.so 00:01:58.651 SYMLINK libspdk_event_sock.so 00:01:58.651 SYMLINK libspdk_event_iobuf.so 00:01:59.220 CC module/event/subsystems/accel/accel.o 00:01:59.220 LIB libspdk_event_accel.a 00:01:59.220 SO libspdk_event_accel.so.6.0 00:01:59.220 SYMLINK libspdk_event_accel.so 00:01:59.791 CC module/event/subsystems/bdev/bdev.o 00:01:59.791 LIB libspdk_event_bdev.a 00:01:59.791 SO libspdk_event_bdev.so.6.0 00:02:00.052 SYMLINK libspdk_event_bdev.so 00:02:00.312 CC module/event/subsystems/scsi/scsi.o 00:02:00.312 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:00.312 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:00.312 CC module/event/subsystems/ublk/ublk.o 00:02:00.312 CC module/event/subsystems/nbd/nbd.o 00:02:00.572 LIB libspdk_event_ublk.a 00:02:00.572 LIB libspdk_event_nbd.a 00:02:00.572 LIB libspdk_event_scsi.a 00:02:00.572 SO libspdk_event_ublk.so.3.0 00:02:00.572 SO libspdk_event_nbd.so.6.0 00:02:00.572 SO libspdk_event_scsi.so.6.0 00:02:00.572 LIB libspdk_event_nvmf.a 00:02:00.572 SYMLINK libspdk_event_ublk.so 00:02:00.572 SYMLINK libspdk_event_nbd.so 00:02:00.572 SYMLINK libspdk_event_scsi.so 00:02:00.572 SO libspdk_event_nvmf.so.6.0 00:02:00.836 SYMLINK libspdk_event_nvmf.so 00:02:01.098 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:01.098 CC module/event/subsystems/iscsi/iscsi.o 00:02:01.098 LIB libspdk_event_vhost_scsi.a 00:02:01.098 LIB libspdk_event_iscsi.a 00:02:01.098 SO libspdk_event_vhost_scsi.so.3.0 00:02:01.098 SO libspdk_event_iscsi.so.6.0 00:02:01.360 SYMLINK libspdk_event_vhost_scsi.so 00:02:01.360 SYMLINK libspdk_event_iscsi.so 00:02:01.360 SO libspdk.so.6.0 00:02:01.360 SYMLINK libspdk.so 00:02:01.931 CXX app/trace/trace.o 00:02:01.931 CC app/trace_record/trace_record.o 00:02:01.931 CC app/spdk_nvme_perf/perf.o 00:02:01.931 CC app/spdk_lspci/spdk_lspci.o 00:02:01.931 CC app/spdk_nvme_discover/discovery_aer.o 00:02:01.931 CC test/rpc_client/rpc_client_test.o 00:02:01.931 CC app/spdk_top/spdk_top.o 00:02:01.931 CC app/spdk_nvme_identify/identify.o 00:02:01.931 TEST_HEADER include/spdk/accel.h 00:02:01.931 TEST_HEADER include/spdk/accel_module.h 00:02:01.931 TEST_HEADER include/spdk/assert.h 00:02:01.931 TEST_HEADER include/spdk/barrier.h 00:02:01.931 TEST_HEADER include/spdk/base64.h 00:02:01.931 TEST_HEADER include/spdk/bdev.h 00:02:01.931 TEST_HEADER include/spdk/bdev_module.h 00:02:01.931 TEST_HEADER include/spdk/bdev_zone.h 00:02:01.931 TEST_HEADER include/spdk/bit_array.h 00:02:01.931 TEST_HEADER include/spdk/bit_pool.h 00:02:01.931 TEST_HEADER include/spdk/blob_bdev.h 00:02:01.931 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:01.931 TEST_HEADER include/spdk/blobfs.h 00:02:01.931 TEST_HEADER include/spdk/blob.h 00:02:01.931 TEST_HEADER include/spdk/conf.h 00:02:01.931 TEST_HEADER include/spdk/config.h 00:02:01.931 TEST_HEADER include/spdk/cpuset.h 00:02:01.931 TEST_HEADER include/spdk/crc16.h 00:02:01.931 TEST_HEADER include/spdk/crc64.h 00:02:01.931 TEST_HEADER include/spdk/crc32.h 00:02:01.931 TEST_HEADER include/spdk/dif.h 00:02:01.931 TEST_HEADER include/spdk/dma.h 00:02:01.931 TEST_HEADER include/spdk/endian.h 00:02:01.931 TEST_HEADER include/spdk/env_dpdk.h 00:02:01.931 TEST_HEADER include/spdk/env.h 00:02:01.931 TEST_HEADER include/spdk/event.h 00:02:01.931 
TEST_HEADER include/spdk/fd_group.h 00:02:01.931 TEST_HEADER include/spdk/file.h 00:02:01.931 TEST_HEADER include/spdk/fd.h 00:02:01.931 TEST_HEADER include/spdk/fsdev.h 00:02:01.931 TEST_HEADER include/spdk/fsdev_module.h 00:02:01.931 TEST_HEADER include/spdk/ftl.h 00:02:01.931 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:01.931 TEST_HEADER include/spdk/gpt_spec.h 00:02:01.931 TEST_HEADER include/spdk/hexlify.h 00:02:01.931 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:01.931 CC app/nvmf_tgt/nvmf_main.o 00:02:01.931 TEST_HEADER include/spdk/histogram_data.h 00:02:01.931 TEST_HEADER include/spdk/idxd.h 00:02:01.931 TEST_HEADER include/spdk/idxd_spec.h 00:02:01.931 CC app/iscsi_tgt/iscsi_tgt.o 00:02:01.931 CC app/spdk_dd/spdk_dd.o 00:02:01.931 TEST_HEADER include/spdk/init.h 00:02:01.931 TEST_HEADER include/spdk/ioat.h 00:02:01.931 TEST_HEADER include/spdk/ioat_spec.h 00:02:01.931 TEST_HEADER include/spdk/iscsi_spec.h 00:02:01.931 TEST_HEADER include/spdk/json.h 00:02:01.931 TEST_HEADER include/spdk/jsonrpc.h 00:02:01.931 TEST_HEADER include/spdk/keyring.h 00:02:01.931 TEST_HEADER include/spdk/keyring_module.h 00:02:01.931 TEST_HEADER include/spdk/likely.h 00:02:01.931 TEST_HEADER include/spdk/log.h 00:02:01.931 TEST_HEADER include/spdk/lvol.h 00:02:01.931 TEST_HEADER include/spdk/md5.h 00:02:01.931 TEST_HEADER include/spdk/memory.h 00:02:01.931 TEST_HEADER include/spdk/mmio.h 00:02:01.931 TEST_HEADER include/spdk/nbd.h 00:02:01.931 TEST_HEADER include/spdk/notify.h 00:02:01.931 TEST_HEADER include/spdk/net.h 00:02:01.931 TEST_HEADER include/spdk/nvme.h 00:02:01.931 TEST_HEADER include/spdk/nvme_intel.h 00:02:01.931 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:01.931 CC app/spdk_tgt/spdk_tgt.o 00:02:01.931 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:01.931 TEST_HEADER include/spdk/nvme_spec.h 00:02:01.931 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:01.931 TEST_HEADER include/spdk/nvme_zns.h 00:02:01.931 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:01.931 TEST_HEADER include/spdk/nvmf.h 00:02:01.931 TEST_HEADER include/spdk/nvmf_spec.h 00:02:01.931 TEST_HEADER include/spdk/nvmf_transport.h 00:02:01.931 TEST_HEADER include/spdk/opal.h 00:02:01.931 TEST_HEADER include/spdk/opal_spec.h 00:02:01.931 TEST_HEADER include/spdk/pci_ids.h 00:02:01.931 TEST_HEADER include/spdk/pipe.h 00:02:01.931 TEST_HEADER include/spdk/queue.h 00:02:01.931 TEST_HEADER include/spdk/reduce.h 00:02:01.931 TEST_HEADER include/spdk/rpc.h 00:02:01.931 TEST_HEADER include/spdk/scheduler.h 00:02:01.931 TEST_HEADER include/spdk/scsi.h 00:02:01.931 TEST_HEADER include/spdk/scsi_spec.h 00:02:01.931 TEST_HEADER include/spdk/sock.h 00:02:01.931 TEST_HEADER include/spdk/stdinc.h 00:02:01.931 TEST_HEADER include/spdk/string.h 00:02:01.931 TEST_HEADER include/spdk/thread.h 00:02:01.931 TEST_HEADER include/spdk/trace.h 00:02:01.931 TEST_HEADER include/spdk/trace_parser.h 00:02:01.931 TEST_HEADER include/spdk/tree.h 00:02:01.931 TEST_HEADER include/spdk/util.h 00:02:01.931 TEST_HEADER include/spdk/ublk.h 00:02:01.931 TEST_HEADER include/spdk/uuid.h 00:02:01.931 TEST_HEADER include/spdk/version.h 00:02:01.931 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:01.931 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:01.931 TEST_HEADER include/spdk/vhost.h 00:02:01.931 TEST_HEADER include/spdk/vmd.h 00:02:01.931 TEST_HEADER include/spdk/xor.h 00:02:01.931 TEST_HEADER include/spdk/zipf.h 00:02:01.931 CXX test/cpp_headers/accel_module.o 00:02:01.931 CXX test/cpp_headers/accel.o 00:02:01.931 CXX test/cpp_headers/assert.o 00:02:01.932 
CXX test/cpp_headers/barrier.o 00:02:01.932 CXX test/cpp_headers/base64.o 00:02:01.932 CXX test/cpp_headers/bdev.o 00:02:01.932 CXX test/cpp_headers/bdev_module.o 00:02:01.932 CXX test/cpp_headers/bdev_zone.o 00:02:01.932 CXX test/cpp_headers/bit_array.o 00:02:01.932 CXX test/cpp_headers/bit_pool.o 00:02:01.932 CXX test/cpp_headers/blob_bdev.o 00:02:01.932 CXX test/cpp_headers/blobfs_bdev.o 00:02:01.932 CXX test/cpp_headers/blobfs.o 00:02:01.932 CXX test/cpp_headers/conf.o 00:02:01.932 CXX test/cpp_headers/blob.o 00:02:01.932 CXX test/cpp_headers/config.o 00:02:01.932 CXX test/cpp_headers/cpuset.o 00:02:02.196 CXX test/cpp_headers/crc16.o 00:02:02.197 CXX test/cpp_headers/crc64.o 00:02:02.197 CXX test/cpp_headers/crc32.o 00:02:02.197 CXX test/cpp_headers/dif.o 00:02:02.197 CXX test/cpp_headers/dma.o 00:02:02.197 CXX test/cpp_headers/endian.o 00:02:02.197 CXX test/cpp_headers/env_dpdk.o 00:02:02.197 CXX test/cpp_headers/env.o 00:02:02.197 CXX test/cpp_headers/event.o 00:02:02.197 CXX test/cpp_headers/fd_group.o 00:02:02.197 CXX test/cpp_headers/fd.o 00:02:02.197 CXX test/cpp_headers/file.o 00:02:02.197 CXX test/cpp_headers/fsdev.o 00:02:02.197 CXX test/cpp_headers/fsdev_module.o 00:02:02.197 CXX test/cpp_headers/fuse_dispatcher.o 00:02:02.197 CXX test/cpp_headers/ftl.o 00:02:02.197 CXX test/cpp_headers/gpt_spec.o 00:02:02.197 CXX test/cpp_headers/hexlify.o 00:02:02.197 CXX test/cpp_headers/histogram_data.o 00:02:02.197 CXX test/cpp_headers/idxd.o 00:02:02.197 CXX test/cpp_headers/idxd_spec.o 00:02:02.197 CXX test/cpp_headers/ioat.o 00:02:02.197 CXX test/cpp_headers/init.o 00:02:02.197 CXX test/cpp_headers/ioat_spec.o 00:02:02.197 CXX test/cpp_headers/json.o 00:02:02.197 CXX test/cpp_headers/jsonrpc.o 00:02:02.197 CXX test/cpp_headers/keyring.o 00:02:02.197 CXX test/cpp_headers/iscsi_spec.o 00:02:02.197 CXX test/cpp_headers/likely.o 00:02:02.197 CXX test/cpp_headers/keyring_module.o 00:02:02.197 CC examples/ioat/perf/perf.o 00:02:02.197 CXX test/cpp_headers/log.o 00:02:02.197 CXX test/cpp_headers/lvol.o 00:02:02.197 CXX test/cpp_headers/md5.o 00:02:02.197 CXX test/cpp_headers/nbd.o 00:02:02.197 CXX test/cpp_headers/memory.o 00:02:02.197 CXX test/cpp_headers/notify.o 00:02:02.197 CXX test/cpp_headers/mmio.o 00:02:02.197 CXX test/cpp_headers/nvme.o 00:02:02.197 CC examples/ioat/verify/verify.o 00:02:02.197 CXX test/cpp_headers/net.o 00:02:02.197 CXX test/cpp_headers/nvme_intel.o 00:02:02.197 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:02.197 CXX test/cpp_headers/nvme_spec.o 00:02:02.197 CXX test/cpp_headers/nvme_ocssd.o 00:02:02.197 CXX test/cpp_headers/nvme_zns.o 00:02:02.197 CXX test/cpp_headers/nvmf_cmd.o 00:02:02.197 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:02.197 CXX test/cpp_headers/nvmf_transport.o 00:02:02.197 CC app/fio/nvme/fio_plugin.o 00:02:02.197 CC examples/util/zipf/zipf.o 00:02:02.197 CXX test/cpp_headers/nvmf.o 00:02:02.197 CXX test/cpp_headers/nvmf_spec.o 00:02:02.197 CXX test/cpp_headers/opal.o 00:02:02.197 CXX test/cpp_headers/opal_spec.o 00:02:02.197 CXX test/cpp_headers/pci_ids.o 00:02:02.197 CXX test/cpp_headers/pipe.o 00:02:02.197 CC test/app/histogram_perf/histogram_perf.o 00:02:02.197 CXX test/cpp_headers/queue.o 00:02:02.197 CXX test/cpp_headers/rpc.o 00:02:02.197 CXX test/cpp_headers/reduce.o 00:02:02.197 CXX test/cpp_headers/scheduler.o 00:02:02.197 CXX test/cpp_headers/scsi.o 00:02:02.197 CXX test/cpp_headers/stdinc.o 00:02:02.197 CXX test/cpp_headers/scsi_spec.o 00:02:02.197 CXX test/cpp_headers/string.o 00:02:02.197 CC test/app/jsoncat/jsoncat.o 00:02:02.197 
CC test/env/memory/memory_ut.o 00:02:02.197 CC test/env/vtophys/vtophys.o 00:02:02.197 CXX test/cpp_headers/thread.o 00:02:02.197 CXX test/cpp_headers/sock.o 00:02:02.197 CC test/thread/poller_perf/poller_perf.o 00:02:02.197 CXX test/cpp_headers/trace.o 00:02:02.197 CC test/app/stub/stub.o 00:02:02.197 CXX test/cpp_headers/trace_parser.o 00:02:02.197 CXX test/cpp_headers/tree.o 00:02:02.197 CXX test/cpp_headers/ublk.o 00:02:02.197 CXX test/cpp_headers/util.o 00:02:02.197 CXX test/cpp_headers/uuid.o 00:02:02.197 CXX test/cpp_headers/version.o 00:02:02.197 CXX test/cpp_headers/vfio_user_pci.o 00:02:02.197 CC test/env/pci/pci_ut.o 00:02:02.197 CXX test/cpp_headers/vhost.o 00:02:02.197 CXX test/cpp_headers/vfio_user_spec.o 00:02:02.197 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:02.197 CXX test/cpp_headers/vmd.o 00:02:02.197 CXX test/cpp_headers/zipf.o 00:02:02.197 CXX test/cpp_headers/xor.o 00:02:02.197 CC test/dma/test_dma/test_dma.o 00:02:02.197 CC test/app/bdev_svc/bdev_svc.o 00:02:02.197 LINK rpc_client_test 00:02:02.197 CC app/fio/bdev/fio_plugin.o 00:02:02.197 LINK spdk_lspci 00:02:02.468 LINK spdk_nvme_discover 00:02:02.468 LINK nvmf_tgt 00:02:02.468 LINK interrupt_tgt 00:02:02.468 LINK iscsi_tgt 00:02:02.732 LINK spdk_trace_record 00:02:02.732 LINK spdk_tgt 00:02:02.732 LINK histogram_perf 00:02:02.732 LINK spdk_trace 00:02:02.732 CC test/env/mem_callbacks/mem_callbacks.o 00:02:02.732 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:02.732 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:02.992 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:02.992 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:02.992 LINK ioat_perf 00:02:03.251 LINK jsoncat 00:02:03.251 LINK spdk_dd 00:02:03.251 LINK zipf 00:02:03.251 LINK stub 00:02:03.251 LINK poller_perf 00:02:03.251 LINK bdev_svc 00:02:03.251 LINK vtophys 00:02:03.251 LINK env_dpdk_post_init 00:02:03.251 LINK verify 00:02:03.510 CC app/vhost/vhost.o 00:02:03.510 LINK pci_ut 00:02:03.510 LINK nvme_fuzz 00:02:03.510 LINK vhost_fuzz 00:02:03.510 LINK spdk_bdev 00:02:03.510 LINK test_dma 00:02:03.770 LINK spdk_nvme 00:02:03.770 LINK vhost 00:02:03.770 LINK spdk_top 00:02:03.770 LINK mem_callbacks 00:02:03.770 LINK spdk_nvme_identify 00:02:03.770 LINK spdk_nvme_perf 00:02:03.770 CC test/event/event_perf/event_perf.o 00:02:03.770 CC test/event/reactor_perf/reactor_perf.o 00:02:03.770 CC test/event/reactor/reactor.o 00:02:03.770 CC examples/sock/hello_world/hello_sock.o 00:02:03.770 CC examples/idxd/perf/perf.o 00:02:03.770 CC examples/vmd/lsvmd/lsvmd.o 00:02:03.770 CC examples/vmd/led/led.o 00:02:03.770 CC test/event/app_repeat/app_repeat.o 00:02:03.770 CC test/event/scheduler/scheduler.o 00:02:03.770 CC examples/thread/thread/thread_ex.o 00:02:04.050 LINK event_perf 00:02:04.050 LINK reactor 00:02:04.050 LINK reactor_perf 00:02:04.050 LINK lsvmd 00:02:04.050 LINK led 00:02:04.050 LINK app_repeat 00:02:04.050 LINK hello_sock 00:02:04.050 LINK memory_ut 00:02:04.050 LINK scheduler 00:02:04.050 LINK idxd_perf 00:02:04.050 LINK thread 00:02:04.309 CC test/nvme/overhead/overhead.o 00:02:04.309 CC test/nvme/aer/aer.o 00:02:04.309 CC test/nvme/err_injection/err_injection.o 00:02:04.309 CC test/nvme/boot_partition/boot_partition.o 00:02:04.309 CC test/nvme/reserve/reserve.o 00:02:04.309 CC test/nvme/reset/reset.o 00:02:04.309 CC test/nvme/simple_copy/simple_copy.o 00:02:04.309 CC test/nvme/cuse/cuse.o 00:02:04.309 CC test/nvme/e2edp/nvme_dp.o 00:02:04.309 CC test/nvme/startup/startup.o 00:02:04.309 CC test/nvme/sgl/sgl.o 00:02:04.309 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:02:04.309 CC test/nvme/fused_ordering/fused_ordering.o 00:02:04.309 CC test/nvme/compliance/nvme_compliance.o 00:02:04.309 CC test/nvme/connect_stress/connect_stress.o 00:02:04.309 CC test/nvme/fdp/fdp.o 00:02:04.309 CC test/blobfs/mkfs/mkfs.o 00:02:04.309 CC test/accel/dif/dif.o 00:02:04.309 CC test/lvol/esnap/esnap.o 00:02:04.569 LINK boot_partition 00:02:04.569 LINK startup 00:02:04.569 LINK err_injection 00:02:04.569 LINK connect_stress 00:02:04.569 LINK doorbell_aers 00:02:04.569 LINK fused_ordering 00:02:04.569 LINK reserve 00:02:04.569 LINK simple_copy 00:02:04.569 LINK mkfs 00:02:04.569 LINK aer 00:02:04.569 LINK overhead 00:02:04.569 LINK sgl 00:02:04.569 LINK reset 00:02:04.569 LINK nvme_dp 00:02:04.569 LINK nvme_compliance 00:02:04.569 CC examples/nvme/hello_world/hello_world.o 00:02:04.569 CC examples/nvme/reconnect/reconnect.o 00:02:04.569 CC examples/nvme/hotplug/hotplug.o 00:02:04.569 CC examples/nvme/abort/abort.o 00:02:04.569 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:04.569 CC examples/nvme/arbitration/arbitration.o 00:02:04.569 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:04.569 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:04.569 LINK fdp 00:02:04.569 LINK iscsi_fuzz 00:02:04.829 CC examples/accel/perf/accel_perf.o 00:02:04.829 CC examples/blob/hello_world/hello_blob.o 00:02:04.829 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:04.829 CC examples/blob/cli/blobcli.o 00:02:04.829 LINK cmb_copy 00:02:04.829 LINK pmr_persistence 00:02:04.829 LINK hello_world 00:02:04.829 LINK hotplug 00:02:05.090 LINK dif 00:02:05.090 LINK arbitration 00:02:05.090 LINK reconnect 00:02:05.090 LINK abort 00:02:05.090 LINK hello_blob 00:02:05.090 LINK nvme_manage 00:02:05.090 LINK hello_fsdev 00:02:05.351 LINK accel_perf 00:02:05.351 LINK blobcli 00:02:05.611 LINK cuse 00:02:05.611 CC test/bdev/bdevio/bdevio.o 00:02:05.872 CC examples/bdev/hello_world/hello_bdev.o 00:02:05.872 CC examples/bdev/bdevperf/bdevperf.o 00:02:05.872 LINK bdevio 00:02:06.131 LINK hello_bdev 00:02:06.705 LINK bdevperf 00:02:07.277 CC examples/nvmf/nvmf/nvmf.o 00:02:07.537 LINK nvmf 00:02:08.918 LINK esnap 00:02:09.488 00:02:09.488 real 0m56.091s 00:02:09.488 user 8m7.793s 00:02:09.488 sys 5m30.613s 00:02:09.488 13:52:15 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:09.488 13:52:15 make -- common/autotest_common.sh@10 -- $ set +x 00:02:09.488 ************************************ 00:02:09.488 END TEST make 00:02:09.488 ************************************ 00:02:09.488 13:52:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:09.488 13:52:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:09.488 13:52:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:09.488 13:52:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.488 13:52:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:09.488 13:52:15 -- pm/common@44 -- $ pid=2398617 00:02:09.488 13:52:15 -- pm/common@50 -- $ kill -TERM 2398617 00:02:09.488 13:52:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.488 13:52:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:09.488 13:52:15 -- pm/common@44 -- $ pid=2398618 00:02:09.488 13:52:15 -- pm/common@50 -- $ kill -TERM 2398618 00:02:09.488 13:52:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.488 
13:52:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:09.488 13:52:15 -- pm/common@44 -- $ pid=2398620 00:02:09.488 13:52:15 -- pm/common@50 -- $ kill -TERM 2398620 00:02:09.488 13:52:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.488 13:52:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:09.488 13:52:15 -- pm/common@44 -- $ pid=2398643 00:02:09.488 13:52:15 -- pm/common@50 -- $ sudo -E kill -TERM 2398643 00:02:09.488 13:52:15 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:09.488 13:52:15 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:09.488 13:52:15 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:09.488 13:52:15 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:09.488 13:52:15 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:09.488 13:52:15 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:09.488 13:52:15 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:09.488 13:52:15 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:09.488 13:52:15 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:09.488 13:52:15 -- scripts/common.sh@336 -- # IFS=.-: 00:02:09.488 13:52:15 -- scripts/common.sh@336 -- # read -ra ver1 00:02:09.488 13:52:15 -- scripts/common.sh@337 -- # IFS=.-: 00:02:09.488 13:52:15 -- scripts/common.sh@337 -- # read -ra ver2 00:02:09.488 13:52:15 -- scripts/common.sh@338 -- # local 'op=<' 00:02:09.488 13:52:15 -- scripts/common.sh@340 -- # ver1_l=2 00:02:09.488 13:52:15 -- scripts/common.sh@341 -- # ver2_l=1 00:02:09.488 13:52:15 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:09.488 13:52:15 -- scripts/common.sh@344 -- # case "$op" in 00:02:09.488 13:52:15 -- scripts/common.sh@345 -- # : 1 00:02:09.488 13:52:15 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:09.488 13:52:15 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:09.488 13:52:15 -- scripts/common.sh@365 -- # decimal 1 00:02:09.488 13:52:15 -- scripts/common.sh@353 -- # local d=1 00:02:09.488 13:52:15 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:09.488 13:52:15 -- scripts/common.sh@355 -- # echo 1 00:02:09.488 13:52:15 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:09.488 13:52:15 -- scripts/common.sh@366 -- # decimal 2 00:02:09.488 13:52:15 -- scripts/common.sh@353 -- # local d=2 00:02:09.488 13:52:15 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:09.488 13:52:15 -- scripts/common.sh@355 -- # echo 2 00:02:09.488 13:52:15 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:09.488 13:52:15 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:09.488 13:52:15 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:09.488 13:52:15 -- scripts/common.sh@368 -- # return 0 00:02:09.488 13:52:15 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:09.488 13:52:15 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:09.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:09.488 --rc genhtml_branch_coverage=1 00:02:09.488 --rc genhtml_function_coverage=1 00:02:09.488 --rc genhtml_legend=1 00:02:09.488 --rc geninfo_all_blocks=1 00:02:09.488 --rc geninfo_unexecuted_blocks=1 00:02:09.488 00:02:09.488 ' 00:02:09.488 13:52:15 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:09.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:09.488 --rc genhtml_branch_coverage=1 00:02:09.488 --rc genhtml_function_coverage=1 00:02:09.488 --rc genhtml_legend=1 00:02:09.488 --rc geninfo_all_blocks=1 00:02:09.488 --rc geninfo_unexecuted_blocks=1 00:02:09.488 00:02:09.488 ' 00:02:09.488 13:52:15 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:09.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:09.488 --rc genhtml_branch_coverage=1 00:02:09.488 --rc genhtml_function_coverage=1 00:02:09.488 --rc genhtml_legend=1 00:02:09.488 --rc geninfo_all_blocks=1 00:02:09.488 --rc geninfo_unexecuted_blocks=1 00:02:09.488 00:02:09.488 ' 00:02:09.488 13:52:15 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:09.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:09.488 --rc genhtml_branch_coverage=1 00:02:09.488 --rc genhtml_function_coverage=1 00:02:09.488 --rc genhtml_legend=1 00:02:09.488 --rc geninfo_all_blocks=1 00:02:09.488 --rc geninfo_unexecuted_blocks=1 00:02:09.488 00:02:09.488 ' 00:02:09.488 13:52:15 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:09.488 13:52:15 -- nvmf/common.sh@7 -- # uname -s 00:02:09.488 13:52:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:09.488 13:52:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:09.488 13:52:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:09.488 13:52:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:09.488 13:52:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:09.488 13:52:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:09.488 13:52:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:09.488 13:52:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:09.488 13:52:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:09.488 13:52:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:09.488 13:52:15 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:09.488 13:52:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:09.488 13:52:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:09.488 13:52:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:09.488 13:52:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:09.488 13:52:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:09.488 13:52:15 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:09.488 13:52:15 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:09.749 13:52:15 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:09.749 13:52:15 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:09.749 13:52:15 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:09.749 13:52:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.749 13:52:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.749 13:52:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.749 13:52:15 -- paths/export.sh@5 -- # export PATH 00:02:09.749 13:52:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.749 13:52:15 -- nvmf/common.sh@51 -- # : 0 00:02:09.749 13:52:15 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:09.749 13:52:15 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:09.749 13:52:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:09.749 13:52:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:09.749 13:52:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:09.749 13:52:15 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:09.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:09.749 13:52:15 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:09.749 13:52:15 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:09.749 13:52:15 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:09.749 13:52:15 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:09.749 13:52:15 -- spdk/autotest.sh@32 -- # uname -s 00:02:09.749 13:52:15 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:09.749 13:52:15 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:09.749 13:52:15 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
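The "[: : integer expression expected" failure recorded just above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': test's -eq operator needs two integers, and the variable being checked expanded to an empty string. A minimal sketch of the failure and two defensive spellings (MAYBE_FLAG is a hypothetical variable name, not one from the script):

    #!/usr/bin/env bash
    # Reproduce the error from nvmf/common.sh line 33: an empty expansion
    # fed to an arithmetic test operator.
    unset MAYBE_FLAG
    [ "$MAYBE_FLAG" -eq 1 ] && echo on    # -> "[: : integer expression expected"

    # Defensive spellings: default the expansion to 0, or use (( )), which
    # treats an unset name as 0 instead of erroring.
    [ "${MAYBE_FLAG:-0}" -eq 1 ] && echo on
    (( MAYBE_FLAG == 1 )) && echo on

The test's nonzero exit is harmless here because the caller does not run under set -e, which is why the log continues past the error.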
00:02:09.749 13:52:15 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:09.749 13:52:15 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:09.749 13:52:15 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:09.749 13:52:15 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:09.749 13:52:15 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:09.749 13:52:15 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:09.749 13:52:15 -- spdk/autotest.sh@48 -- # udevadm_pid=2464736 00:02:09.749 13:52:15 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:09.749 13:52:15 -- pm/common@17 -- # local monitor 00:02:09.749 13:52:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.749 13:52:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.749 13:52:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.749 13:52:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.749 13:52:15 -- pm/common@21 -- # date +%s 00:02:09.749 13:52:15 -- pm/common@21 -- # date +%s 00:02:09.749 13:52:15 -- pm/common@25 -- # sleep 1 00:02:09.749 13:52:15 -- pm/common@21 -- # date +%s 00:02:09.749 13:52:15 -- pm/common@21 -- # date +%s 00:02:09.749 13:52:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733403135 00:02:09.749 13:52:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733403135 00:02:09.749 13:52:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733403135 00:02:09.749 13:52:15 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733403135 00:02:09.749 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733403135_collect-vmstat.pm.log 00:02:09.749 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733403135_collect-cpu-load.pm.log 00:02:09.749 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733403135_collect-cpu-temp.pm.log 00:02:09.749 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733403135_collect-bmc-pm.bmc.pm.log 00:02:10.690 13:52:16 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:10.690 13:52:16 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:10.690 13:52:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:10.690 13:52:16 -- common/autotest_common.sh@10 -- # set +x 00:02:10.690 13:52:16 -- spdk/autotest.sh@59 -- # create_test_list 00:02:10.690 13:52:16 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:10.690 13:52:16 -- common/autotest_common.sh@10 -- # set +x 00:02:10.690 13:52:16 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:10.690 13:52:16 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:10.690 13:52:16 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:10.690 13:52:16 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:10.690 13:52:16 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:10.690 13:52:16 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:10.690 13:52:16 -- common/autotest_common.sh@1457 -- # uname 00:02:10.690 13:52:16 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:10.690 13:52:16 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:10.690 13:52:16 -- common/autotest_common.sh@1477 -- # uname 00:02:10.690 13:52:16 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:10.690 13:52:16 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:10.690 13:52:16 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:10.690 lcov: LCOV version 1.15 00:02:10.690 13:52:16 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:25.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:25.599 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:43.822 13:52:47 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:43.822 13:52:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:43.822 13:52:47 -- common/autotest_common.sh@10 -- # set +x 00:02:43.822 13:52:47 -- spdk/autotest.sh@78 -- # rm -f 00:02:43.823 13:52:47 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:44.764 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:44.764 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:44.764 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:44.764 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:44.764 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:44.764 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:44.764 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:44.764 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:44.764 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:45.024 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:45.024 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:45.024 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:45.024 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:45.024 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:45.024 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:45.024 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:45.024 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:45.024 13:52:51 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:02:45.024 13:52:51 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:02:45.024 13:52:51 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:02:45.024 13:52:51 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:02:45.024 13:52:51 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:02:45.024 13:52:51 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:02:45.024 13:52:51 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:02:45.024 13:52:51 -- common/autotest_common.sh@1669 -- # bdf=0000:65:00.0 00:02:45.024 13:52:51 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:02:45.024 13:52:51 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:02:45.024 13:52:51 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:02:45.024 13:52:51 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:45.024 13:52:51 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:02:45.024 13:52:51 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:45.024 13:52:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:45.024 13:52:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:45.024 13:52:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:45.024 13:52:51 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:45.024 13:52:51 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:45.024 No valid GPT data, bailing 00:02:45.024 13:52:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:45.024 13:52:51 -- scripts/common.sh@394 -- # pt= 00:02:45.024 13:52:51 -- scripts/common.sh@395 -- # return 1 00:02:45.024 13:52:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:45.285 1+0 records in 00:02:45.285 1+0 records out 00:02:45.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00488694 s, 215 MB/s 00:02:45.285 13:52:51 -- spdk/autotest.sh@105 -- # sync 00:02:45.285 13:52:51 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:45.285 13:52:51 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:45.285 13:52:51 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:55.279 13:52:59 -- spdk/autotest.sh@111 -- # uname -s 00:02:55.280 13:52:59 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:02:55.280 13:52:59 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:02:55.280 13:52:59 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:57.193 Hugepages 00:02:57.193 node hugesize free / total 00:02:57.193 node0 1048576kB 0 / 0 00:02:57.193 node0 2048kB 0 / 0 00:02:57.193 node1 1048576kB 0 / 0 00:02:57.193 node1 2048kB 0 / 0 00:02:57.193 00:02:57.193 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:57.193 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:02:57.193 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:02:57.193 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:02:57.193 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:02:57.193 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:02:57.193 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:02:57.193 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:02:57.193 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:02:57.193 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:02:57.193 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:02:57.193 I/OAT 0000:80:01.1 8086 0b00 1 
ioatdma - - 00:02:57.453 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:02:57.453 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:02:57.453 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:02:57.453 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:02:57.453 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:02:57.453 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:02:57.453 13:53:03 -- spdk/autotest.sh@117 -- # uname -s 00:02:57.453 13:53:03 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:02:57.453 13:53:03 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:02:57.453 13:53:03 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:00.756 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:00.756 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:00.756 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:00.756 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:00.756 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:00.756 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:00.756 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:00.756 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:00.756 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:00.756 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:00.756 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:00.756 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:00.756 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:00.756 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:00.756 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:00.756 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:02.669 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:02.669 13:53:08 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:03.609 13:53:09 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:03.609 13:53:09 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:03.609 13:53:09 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:03.869 13:53:09 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:03.869 13:53:09 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:03.869 13:53:09 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:03.869 13:53:09 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:03.869 13:53:09 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:03.869 13:53:09 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:03.869 13:53:09 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:03.869 13:53:09 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:03.869 13:53:09 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:07.167 Waiting for block devices as requested 00:03:07.167 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:07.427 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:07.427 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:07.427 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:07.688 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:07.688 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:07.688 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:07.949 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:07.949 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:07.949 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:08.210 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 
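The get_nvme_bdfs helper traced above builds its controller list by asking scripts/gen_nvme.sh for a JSON bdev config and pulling every traddr out with jq. The same pattern as a stand-alone sketch, assuming only the rootdir path that appears in this run; on this node it yields the single BDF 0000:65:00.0:

    #!/usr/bin/env bash
    # Collect NVMe PCI addresses the way get_nvme_bdfs does: gen_nvme.sh
    # emits a JSON config, jq extracts each controller's traddr.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"            # e.g. 0000:65:00.0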
00:03:08.210 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:08.210 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:08.471 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:08.471 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:08.471 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:08.731 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:08.731 13:53:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:08.731 13:53:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:08.731 13:53:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:08.731 13:53:14 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:03:08.731 13:53:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:08.731 13:53:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:08.731 13:53:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:08.731 13:53:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:08.731 13:53:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:08.731 13:53:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:08.731 13:53:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:08.731 13:53:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:08.731 13:53:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:08.731 13:53:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:03:08.731 13:53:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:08.731 13:53:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:08.731 13:53:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:08.731 13:53:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:08.731 13:53:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:08.731 13:53:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:08.731 13:53:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:08.731 13:53:14 -- common/autotest_common.sh@1543 -- # continue 00:03:08.731 13:53:14 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:08.731 13:53:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:08.731 13:53:14 -- common/autotest_common.sh@10 -- # set +x 00:03:08.731 13:53:14 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:08.731 13:53:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:08.731 13:53:14 -- common/autotest_common.sh@10 -- # set +x 00:03:08.731 13:53:14 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:12.949 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:12.949 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:12.949 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:12.949 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:12.949 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:12.949 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:12.949 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:12.949 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:12.949 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:12.949 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:12.949 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:12.949 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:12.949 0000:00:01.2 
(8086 0b00): ioatdma -> vfio-pci 00:03:12.949 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:12.949 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:12.949 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:12.949 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:12.949 13:53:18 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:12.949 13:53:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:12.949 13:53:18 -- common/autotest_common.sh@10 -- # set +x 00:03:12.949 13:53:18 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:12.949 13:53:18 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:12.949 13:53:18 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:12.949 13:53:18 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:12.949 13:53:18 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:12.949 13:53:18 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:12.949 13:53:18 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:12.949 13:53:18 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:12.949 13:53:18 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:12.949 13:53:18 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:12.949 13:53:18 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:12.949 13:53:18 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:12.949 13:53:18 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:12.949 13:53:18 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:12.949 13:53:18 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:03:12.949 13:53:18 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:12.949 13:53:18 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:03:12.949 13:53:18 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:03:12.949 13:53:18 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:12.949 13:53:18 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:03:12.949 13:53:18 -- common/autotest_common.sh@1572 -- # return 0 00:03:12.949 13:53:18 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:03:12.949 13:53:18 -- common/autotest_common.sh@1580 -- # return 0 00:03:12.949 13:53:18 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:12.949 13:53:18 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:12.949 13:53:18 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:12.949 13:53:18 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:12.949 13:53:18 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:12.949 13:53:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:12.949 13:53:18 -- common/autotest_common.sh@10 -- # set +x 00:03:12.949 13:53:18 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:12.949 13:53:18 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:12.949 13:53:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:12.949 13:53:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:12.949 13:53:18 -- common/autotest_common.sh@10 -- # set +x 00:03:12.949 ************************************ 00:03:12.949 START TEST env 00:03:12.949 ************************************ 00:03:12.949 13:53:18 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 
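The opal_revert_cleanup pass above reads each controller's PCI device id from sysfs and compares it against 0x0a54; the Samsung controller on this node (144d:a80a) reports 0xa80a, so the loop matches nothing and the function returns 0. The check, reduced to its core with the BDF taken from the log:

    #!/usr/bin/env bash
    # Read the PCI device id for each NVMe BDF and act only on 0x0a54
    # parts, mirroring get_nvme_bdfs_by_id 0x0a54 in the trace above.
    for bdf in 0000:65:00.0; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == 0x0a54 ]] && echo "$bdf: opal revert needed"
    done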
00:03:12.949 * Looking for test storage... 00:03:12.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:12.949 13:53:18 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:12.949 13:53:18 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:12.949 13:53:18 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:12.949 13:53:18 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:12.949 13:53:18 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:12.949 13:53:18 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:12.949 13:53:18 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:12.949 13:53:18 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:12.949 13:53:18 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:12.949 13:53:18 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:12.949 13:53:18 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:12.949 13:53:18 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:12.949 13:53:18 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:12.949 13:53:18 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:12.949 13:53:18 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:12.949 13:53:18 env -- scripts/common.sh@344 -- # case "$op" in 00:03:12.949 13:53:18 env -- scripts/common.sh@345 -- # : 1 00:03:12.949 13:53:18 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:12.949 13:53:18 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:12.949 13:53:19 env -- scripts/common.sh@365 -- # decimal 1 00:03:12.949 13:53:19 env -- scripts/common.sh@353 -- # local d=1 00:03:12.949 13:53:19 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:12.949 13:53:19 env -- scripts/common.sh@355 -- # echo 1 00:03:12.949 13:53:19 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:12.949 13:53:19 env -- scripts/common.sh@366 -- # decimal 2 00:03:12.949 13:53:19 env -- scripts/common.sh@353 -- # local d=2 00:03:12.949 13:53:19 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:12.949 13:53:19 env -- scripts/common.sh@355 -- # echo 2 00:03:12.949 13:53:19 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:12.949 13:53:19 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:12.949 13:53:19 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:12.949 13:53:19 env -- scripts/common.sh@368 -- # return 0 00:03:12.949 13:53:19 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:12.949 13:53:19 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:12.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:12.949 --rc genhtml_branch_coverage=1 00:03:12.949 --rc genhtml_function_coverage=1 00:03:12.949 --rc genhtml_legend=1 00:03:12.949 --rc geninfo_all_blocks=1 00:03:12.949 --rc geninfo_unexecuted_blocks=1 00:03:12.949 00:03:12.949 ' 00:03:12.949 13:53:19 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:12.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:12.949 --rc genhtml_branch_coverage=1 00:03:12.949 --rc genhtml_function_coverage=1 00:03:12.949 --rc genhtml_legend=1 00:03:12.949 --rc geninfo_all_blocks=1 00:03:12.949 --rc geninfo_unexecuted_blocks=1 00:03:12.949 00:03:12.949 ' 00:03:12.949 13:53:19 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:12.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:12.949 --rc genhtml_branch_coverage=1 00:03:12.949 
--rc genhtml_function_coverage=1 00:03:12.949 --rc genhtml_legend=1 00:03:12.949 --rc geninfo_all_blocks=1 00:03:12.949 --rc geninfo_unexecuted_blocks=1 00:03:12.949 00:03:12.949 ' 00:03:12.949 13:53:19 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:12.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:12.949 --rc genhtml_branch_coverage=1 00:03:12.949 --rc genhtml_function_coverage=1 00:03:12.949 --rc genhtml_legend=1 00:03:12.949 --rc geninfo_all_blocks=1 00:03:12.949 --rc geninfo_unexecuted_blocks=1 00:03:12.949 00:03:12.949 ' 00:03:12.949 13:53:19 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:12.949 13:53:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:12.949 13:53:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:12.949 13:53:19 env -- common/autotest_common.sh@10 -- # set +x 00:03:12.949 ************************************ 00:03:12.949 START TEST env_memory 00:03:12.949 ************************************ 00:03:12.949 13:53:19 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:12.949 00:03:12.949 00:03:12.949 CUnit - A unit testing framework for C - Version 2.1-3 00:03:12.949 http://cunit.sourceforge.net/ 00:03:12.949 00:03:12.949 00:03:12.949 Suite: memory 00:03:12.949 Test: alloc and free memory map ...[2024-12-05 13:53:19.108669] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:12.949 passed 00:03:12.950 Test: mem map translation ...[2024-12-05 13:53:19.134302] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:12.950 [2024-12-05 13:53:19.134331] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:12.950 [2024-12-05 13:53:19.134377] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:12.950 [2024-12-05 13:53:19.134390] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:12.950 passed 00:03:12.950 Test: mem map registration ...[2024-12-05 13:53:19.189586] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:12.950 [2024-12-05 13:53:19.189626] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:12.950 passed 00:03:13.210 Test: mem map adjacent registrations ...passed 00:03:13.210 00:03:13.210 Run Summary: Type Total Ran Passed Failed Inactive 00:03:13.210 suites 1 1 n/a 0 0 00:03:13.210 tests 4 4 4 0 0 00:03:13.210 asserts 152 152 152 0 n/a 00:03:13.210 00:03:13.210 Elapsed time = 0.196 seconds 00:03:13.210 00:03:13.210 real 0m0.211s 00:03:13.210 user 0m0.198s 00:03:13.210 sys 0m0.013s 00:03:13.210 13:53:19 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:13.210 13:53:19 env.env_memory -- 
common/autotest_common.sh@10 -- # set +x 00:03:13.210 ************************************ 00:03:13.210 END TEST env_memory 00:03:13.210 ************************************ 00:03:13.210 13:53:19 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:13.210 13:53:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:13.210 13:53:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:13.210 13:53:19 env -- common/autotest_common.sh@10 -- # set +x 00:03:13.210 ************************************ 00:03:13.210 START TEST env_vtophys 00:03:13.210 ************************************ 00:03:13.210 13:53:19 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:13.210 EAL: lib.eal log level changed from notice to debug 00:03:13.210 EAL: Detected lcore 0 as core 0 on socket 0 00:03:13.210 EAL: Detected lcore 1 as core 1 on socket 0 00:03:13.210 EAL: Detected lcore 2 as core 2 on socket 0 00:03:13.210 EAL: Detected lcore 3 as core 3 on socket 0 00:03:13.210 EAL: Detected lcore 4 as core 4 on socket 0 00:03:13.210 EAL: Detected lcore 5 as core 5 on socket 0 00:03:13.210 EAL: Detected lcore 6 as core 6 on socket 0 00:03:13.210 EAL: Detected lcore 7 as core 7 on socket 0 00:03:13.210 EAL: Detected lcore 8 as core 8 on socket 0 00:03:13.210 EAL: Detected lcore 9 as core 9 on socket 0 00:03:13.210 EAL: Detected lcore 10 as core 10 on socket 0 00:03:13.210 EAL: Detected lcore 11 as core 11 on socket 0 00:03:13.210 EAL: Detected lcore 12 as core 12 on socket 0 00:03:13.210 EAL: Detected lcore 13 as core 13 on socket 0 00:03:13.210 EAL: Detected lcore 14 as core 14 on socket 0 00:03:13.210 EAL: Detected lcore 15 as core 15 on socket 0 00:03:13.210 EAL: Detected lcore 16 as core 16 on socket 0 00:03:13.210 EAL: Detected lcore 17 as core 17 on socket 0 00:03:13.210 EAL: Detected lcore 18 as core 18 on socket 0 00:03:13.210 EAL: Detected lcore 19 as core 19 on socket 0 00:03:13.210 EAL: Detected lcore 20 as core 20 on socket 0 00:03:13.210 EAL: Detected lcore 21 as core 21 on socket 0 00:03:13.210 EAL: Detected lcore 22 as core 22 on socket 0 00:03:13.210 EAL: Detected lcore 23 as core 23 on socket 0 00:03:13.210 EAL: Detected lcore 24 as core 24 on socket 0 00:03:13.210 EAL: Detected lcore 25 as core 25 on socket 0 00:03:13.210 EAL: Detected lcore 26 as core 26 on socket 0 00:03:13.210 EAL: Detected lcore 27 as core 27 on socket 0 00:03:13.210 EAL: Detected lcore 28 as core 28 on socket 0 00:03:13.210 EAL: Detected lcore 29 as core 29 on socket 0 00:03:13.210 EAL: Detected lcore 30 as core 30 on socket 0 00:03:13.210 EAL: Detected lcore 31 as core 31 on socket 0 00:03:13.210 EAL: Detected lcore 32 as core 32 on socket 0 00:03:13.210 EAL: Detected lcore 33 as core 33 on socket 0 00:03:13.210 EAL: Detected lcore 34 as core 34 on socket 0 00:03:13.210 EAL: Detected lcore 35 as core 35 on socket 0 00:03:13.210 EAL: Detected lcore 36 as core 0 on socket 1 00:03:13.211 EAL: Detected lcore 37 as core 1 on socket 1 00:03:13.211 EAL: Detected lcore 38 as core 2 on socket 1 00:03:13.211 EAL: Detected lcore 39 as core 3 on socket 1 00:03:13.211 EAL: Detected lcore 40 as core 4 on socket 1 00:03:13.211 EAL: Detected lcore 41 as core 5 on socket 1 00:03:13.211 EAL: Detected lcore 42 as core 6 on socket 1 00:03:13.211 EAL: Detected lcore 43 as core 7 on socket 1 00:03:13.211 EAL: Detected lcore 44 as core 8 on socket 1 00:03:13.211 EAL: Detected 
lcore 45 as core 9 on socket 1 00:03:13.211 EAL: Detected lcore 46 as core 10 on socket 1 00:03:13.211 EAL: Detected lcore 47 as core 11 on socket 1 00:03:13.211 EAL: Detected lcore 48 as core 12 on socket 1 00:03:13.211 EAL: Detected lcore 49 as core 13 on socket 1 00:03:13.211 EAL: Detected lcore 50 as core 14 on socket 1 00:03:13.211 EAL: Detected lcore 51 as core 15 on socket 1 00:03:13.211 EAL: Detected lcore 52 as core 16 on socket 1 00:03:13.211 EAL: Detected lcore 53 as core 17 on socket 1 00:03:13.211 EAL: Detected lcore 54 as core 18 on socket 1 00:03:13.211 EAL: Detected lcore 55 as core 19 on socket 1 00:03:13.211 EAL: Detected lcore 56 as core 20 on socket 1 00:03:13.211 EAL: Detected lcore 57 as core 21 on socket 1 00:03:13.211 EAL: Detected lcore 58 as core 22 on socket 1 00:03:13.211 EAL: Detected lcore 59 as core 23 on socket 1 00:03:13.211 EAL: Detected lcore 60 as core 24 on socket 1 00:03:13.211 EAL: Detected lcore 61 as core 25 on socket 1 00:03:13.211 EAL: Detected lcore 62 as core 26 on socket 1 00:03:13.211 EAL: Detected lcore 63 as core 27 on socket 1 00:03:13.211 EAL: Detected lcore 64 as core 28 on socket 1 00:03:13.211 EAL: Detected lcore 65 as core 29 on socket 1 00:03:13.211 EAL: Detected lcore 66 as core 30 on socket 1 00:03:13.211 EAL: Detected lcore 67 as core 31 on socket 1 00:03:13.211 EAL: Detected lcore 68 as core 32 on socket 1 00:03:13.211 EAL: Detected lcore 69 as core 33 on socket 1 00:03:13.211 EAL: Detected lcore 70 as core 34 on socket 1 00:03:13.211 EAL: Detected lcore 71 as core 35 on socket 1 00:03:13.211 EAL: Detected lcore 72 as core 0 on socket 0 00:03:13.211 EAL: Detected lcore 73 as core 1 on socket 0 00:03:13.211 EAL: Detected lcore 74 as core 2 on socket 0 00:03:13.211 EAL: Detected lcore 75 as core 3 on socket 0 00:03:13.211 EAL: Detected lcore 76 as core 4 on socket 0 00:03:13.211 EAL: Detected lcore 77 as core 5 on socket 0 00:03:13.211 EAL: Detected lcore 78 as core 6 on socket 0 00:03:13.211 EAL: Detected lcore 79 as core 7 on socket 0 00:03:13.211 EAL: Detected lcore 80 as core 8 on socket 0 00:03:13.211 EAL: Detected lcore 81 as core 9 on socket 0 00:03:13.211 EAL: Detected lcore 82 as core 10 on socket 0 00:03:13.211 EAL: Detected lcore 83 as core 11 on socket 0 00:03:13.211 EAL: Detected lcore 84 as core 12 on socket 0 00:03:13.211 EAL: Detected lcore 85 as core 13 on socket 0 00:03:13.211 EAL: Detected lcore 86 as core 14 on socket 0 00:03:13.211 EAL: Detected lcore 87 as core 15 on socket 0 00:03:13.211 EAL: Detected lcore 88 as core 16 on socket 0 00:03:13.211 EAL: Detected lcore 89 as core 17 on socket 0 00:03:13.211 EAL: Detected lcore 90 as core 18 on socket 0 00:03:13.211 EAL: Detected lcore 91 as core 19 on socket 0 00:03:13.211 EAL: Detected lcore 92 as core 20 on socket 0 00:03:13.211 EAL: Detected lcore 93 as core 21 on socket 0 00:03:13.211 EAL: Detected lcore 94 as core 22 on socket 0 00:03:13.211 EAL: Detected lcore 95 as core 23 on socket 0 00:03:13.211 EAL: Detected lcore 96 as core 24 on socket 0 00:03:13.211 EAL: Detected lcore 97 as core 25 on socket 0 00:03:13.211 EAL: Detected lcore 98 as core 26 on socket 0 00:03:13.211 EAL: Detected lcore 99 as core 27 on socket 0 00:03:13.211 EAL: Detected lcore 100 as core 28 on socket 0 00:03:13.211 EAL: Detected lcore 101 as core 29 on socket 0 00:03:13.211 EAL: Detected lcore 102 as core 30 on socket 0 00:03:13.211 EAL: Detected lcore 103 as core 31 on socket 0 00:03:13.211 EAL: Detected lcore 104 as core 32 on socket 0 00:03:13.211 EAL: Detected lcore 105 as core 33 
on socket 0 00:03:13.211 EAL: Detected lcore 106 as core 34 on socket 0 00:03:13.211 EAL: Detected lcore 107 as core 35 on socket 0 00:03:13.211 EAL: Detected lcore 108 as core 0 on socket 1 00:03:13.211 EAL: Detected lcore 109 as core 1 on socket 1 00:03:13.211 EAL: Detected lcore 110 as core 2 on socket 1 00:03:13.211 EAL: Detected lcore 111 as core 3 on socket 1 00:03:13.211 EAL: Detected lcore 112 as core 4 on socket 1 00:03:13.211 EAL: Detected lcore 113 as core 5 on socket 1 00:03:13.211 EAL: Detected lcore 114 as core 6 on socket 1 00:03:13.211 EAL: Detected lcore 115 as core 7 on socket 1 00:03:13.211 EAL: Detected lcore 116 as core 8 on socket 1 00:03:13.211 EAL: Detected lcore 117 as core 9 on socket 1 00:03:13.211 EAL: Detected lcore 118 as core 10 on socket 1 00:03:13.211 EAL: Detected lcore 119 as core 11 on socket 1 00:03:13.211 EAL: Detected lcore 120 as core 12 on socket 1 00:03:13.211 EAL: Detected lcore 121 as core 13 on socket 1 00:03:13.211 EAL: Detected lcore 122 as core 14 on socket 1 00:03:13.211 EAL: Detected lcore 123 as core 15 on socket 1 00:03:13.211 EAL: Detected lcore 124 as core 16 on socket 1 00:03:13.211 EAL: Detected lcore 125 as core 17 on socket 1 00:03:13.211 EAL: Detected lcore 126 as core 18 on socket 1 00:03:13.211 EAL: Detected lcore 127 as core 19 on socket 1 00:03:13.211 EAL: Skipped lcore 128 as core 20 on socket 1 00:03:13.211 EAL: Skipped lcore 129 as core 21 on socket 1 00:03:13.211 EAL: Skipped lcore 130 as core 22 on socket 1 00:03:13.211 EAL: Skipped lcore 131 as core 23 on socket 1 00:03:13.211 EAL: Skipped lcore 132 as core 24 on socket 1 00:03:13.211 EAL: Skipped lcore 133 as core 25 on socket 1 00:03:13.211 EAL: Skipped lcore 134 as core 26 on socket 1 00:03:13.211 EAL: Skipped lcore 135 as core 27 on socket 1 00:03:13.211 EAL: Skipped lcore 136 as core 28 on socket 1 00:03:13.211 EAL: Skipped lcore 137 as core 29 on socket 1 00:03:13.211 EAL: Skipped lcore 138 as core 30 on socket 1 00:03:13.211 EAL: Skipped lcore 139 as core 31 on socket 1 00:03:13.211 EAL: Skipped lcore 140 as core 32 on socket 1 00:03:13.211 EAL: Skipped lcore 141 as core 33 on socket 1 00:03:13.211 EAL: Skipped lcore 142 as core 34 on socket 1 00:03:13.211 EAL: Skipped lcore 143 as core 35 on socket 1 00:03:13.211 EAL: Maximum logical cores by configuration: 128 00:03:13.211 EAL: Detected CPU lcores: 128 00:03:13.211 EAL: Detected NUMA nodes: 2 00:03:13.211 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:13.211 EAL: Detected shared linkage of DPDK 00:03:13.211 EAL: No shared files mode enabled, IPC will be disabled 00:03:13.211 EAL: Bus pci wants IOVA as 'DC' 00:03:13.211 EAL: Buses did not request a specific IOVA mode. 00:03:13.211 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:13.211 EAL: Selected IOVA mode 'VA' 00:03:13.211 EAL: Probing VFIO support... 00:03:13.211 EAL: IOMMU type 1 (Type 1) is supported 00:03:13.211 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:13.211 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:13.211 EAL: VFIO support initialized 00:03:13.211 EAL: Ask a virtual area of 0x2e000 bytes 00:03:13.211 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:13.211 EAL: Setting up physically contiguous memory... 
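EAL only selects IOVA mode 'VA' above after confirming a Type 1 IOMMU and a usable VFIO. A rough shell approximation of those preconditions, using standard Linux sysfs paths rather than anything SPDK-specific:

    #!/usr/bin/env bash
    # Approximate EAL's probe: an IOMMU is active when iommu_groups is
    # non-empty, and VFIO is usable when its control node exists.
    groups=$(ls /sys/kernel/iommu_groups 2>/dev/null | wc -l)
    if (( groups > 0 )) && [ -c /dev/vfio/vfio ]; then
        echo "IOMMU + VFIO present: EAL can pick IOVA mode 'VA'"
    else
        echo "no usable VFIO: EAL would fall back toward IOVA mode 'PA'"
    fi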
00:03:13.211 EAL: Setting maximum number of open files to 524288 00:03:13.211 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:13.211 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:13.211 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:13.211 EAL: Ask a virtual area of 0x61000 bytes 00:03:13.211 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:13.211 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:13.211 EAL: Ask a virtual area of 0x400000000 bytes 00:03:13.211 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:13.211 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:13.211 EAL: Ask a virtual area of 0x61000 bytes 00:03:13.212 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:13.212 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:13.212 EAL: Ask a virtual area of 0x400000000 bytes 00:03:13.212 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:13.212 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:13.212 EAL: Ask a virtual area of 0x61000 bytes 00:03:13.212 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:13.212 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:13.212 EAL: Ask a virtual area of 0x400000000 bytes 00:03:13.212 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:13.212 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:13.212 EAL: Ask a virtual area of 0x61000 bytes 00:03:13.212 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:13.212 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:13.212 EAL: Ask a virtual area of 0x400000000 bytes 00:03:13.212 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:13.212 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:13.212 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:13.212 EAL: Ask a virtual area of 0x61000 bytes 00:03:13.212 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:13.212 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:13.212 EAL: Ask a virtual area of 0x400000000 bytes 00:03:13.212 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:13.212 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:13.212 EAL: Ask a virtual area of 0x61000 bytes 00:03:13.212 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:13.212 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:13.212 EAL: Ask a virtual area of 0x400000000 bytes 00:03:13.212 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:13.212 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:13.212 EAL: Ask a virtual area of 0x61000 bytes 00:03:13.212 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:13.212 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:13.212 EAL: Ask a virtual area of 0x400000000 bytes 00:03:13.212 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:13.212 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:13.212 EAL: Ask a virtual area of 0x61000 bytes 00:03:13.212 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:13.212 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:13.212 EAL: Ask a virtual area of 0x400000000 bytes 00:03:13.212 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:13.212 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:13.212 EAL: Hugepages will be freed exactly as allocated. 00:03:13.212 EAL: No shared files mode enabled, IPC is disabled 00:03:13.212 EAL: No shared files mode enabled, IPC is disabled 00:03:13.212 EAL: TSC frequency is ~2400000 KHz 00:03:13.212 EAL: Main lcore 0 is ready (tid=7f2fdf448a00;cpuset=[0]) 00:03:13.212 EAL: Trying to obtain current memory policy. 00:03:13.212 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:13.212 EAL: Restoring previous memory policy: 0 00:03:13.212 EAL: request: mp_malloc_sync 00:03:13.212 EAL: No shared files mode enabled, IPC is disabled 00:03:13.212 EAL: Heap on socket 0 was expanded by 2MB 00:03:13.212 EAL: No shared files mode enabled, IPC is disabled 00:03:13.212 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:13.212 EAL: Mem event callback 'spdk:(nil)' registered 00:03:13.212 00:03:13.212 00:03:13.212 CUnit - A unit testing framework for C - Version 2.1-3 00:03:13.212 http://cunit.sourceforge.net/ 00:03:13.212 00:03:13.212 00:03:13.212 Suite: components_suite 00:03:13.212 Test: vtophys_malloc_test ...passed 00:03:13.212 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:13.212 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:13.212 EAL: Restoring previous memory policy: 4 00:03:13.212 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.212 EAL: request: mp_malloc_sync 00:03:13.212 EAL: No shared files mode enabled, IPC is disabled 00:03:13.212 EAL: Heap on socket 0 was expanded by 4MB 00:03:13.212 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.212 EAL: request: mp_malloc_sync 00:03:13.212 EAL: No shared files mode enabled, IPC is disabled 00:03:13.212 EAL: Heap on socket 0 was shrunk by 4MB 00:03:13.212 EAL: Trying to obtain current memory policy. 00:03:13.212 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:13.212 EAL: Restoring previous memory policy: 4 00:03:13.212 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.212 EAL: request: mp_malloc_sync 00:03:13.212 EAL: No shared files mode enabled, IPC is disabled 00:03:13.212 EAL: Heap on socket 0 was expanded by 6MB 00:03:13.212 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.212 EAL: request: mp_malloc_sync 00:03:13.212 EAL: No shared files mode enabled, IPC is disabled 00:03:13.212 EAL: Heap on socket 0 was shrunk by 6MB 00:03:13.212 EAL: Trying to obtain current memory policy. 00:03:13.212 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:13.212 EAL: Restoring previous memory policy: 4 00:03:13.212 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.212 EAL: request: mp_malloc_sync 00:03:13.212 EAL: No shared files mode enabled, IPC is disabled 00:03:13.212 EAL: Heap on socket 0 was expanded by 10MB 00:03:13.212 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.212 EAL: request: mp_malloc_sync 00:03:13.212 EAL: No shared files mode enabled, IPC is disabled 00:03:13.212 EAL: Heap on socket 0 was shrunk by 10MB 00:03:13.212 EAL: Trying to obtain current memory policy. 
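Each expanded-by/shrunk-by pair in the malloc tests maps to whole 2 MB hugepages being taken and returned ("Hugepages will be freed exactly as allocated" above). One way to watch that from outside while the test runs, via the standard per-node sysfs counters:

    #!/usr/bin/env bash
    # Poll node 0's 2MB hugepage counters; free_hugepages should dip on
    # every "Heap ... was expanded by" and recover on the matching shrink.
    node=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
    while sleep 0.5; do
        printf '%s free=%s total=%s\n' "$(date +%T)" \
            "$(cat "$node/free_hugepages")" "$(cat "$node/nr_hugepages")"
    done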
00:03:13.212 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:13.212 EAL: Restoring previous memory policy: 4 00:03:13.212 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.212 EAL: request: mp_malloc_sync 00:03:13.212 EAL: No shared files mode enabled, IPC is disabled 00:03:13.212 EAL: Heap on socket 0 was expanded by 18MB 00:03:13.212 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.212 EAL: request: mp_malloc_sync 00:03:13.212 EAL: No shared files mode enabled, IPC is disabled 00:03:13.212 EAL: Heap on socket 0 was shrunk by 18MB 00:03:13.212 EAL: Trying to obtain current memory policy. 00:03:13.212 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:13.212 EAL: Restoring previous memory policy: 4 00:03:13.212 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.212 EAL: request: mp_malloc_sync 00:03:13.212 EAL: No shared files mode enabled, IPC is disabled 00:03:13.212 EAL: Heap on socket 0 was expanded by 34MB 00:03:13.212 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.212 EAL: request: mp_malloc_sync 00:03:13.212 EAL: No shared files mode enabled, IPC is disabled 00:03:13.212 EAL: Heap on socket 0 was shrunk by 34MB 00:03:13.212 EAL: Trying to obtain current memory policy. 00:03:13.212 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:13.473 EAL: Restoring previous memory policy: 4 00:03:13.473 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.473 EAL: request: mp_malloc_sync 00:03:13.473 EAL: No shared files mode enabled, IPC is disabled 00:03:13.473 EAL: Heap on socket 0 was expanded by 66MB 00:03:13.473 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.473 EAL: request: mp_malloc_sync 00:03:13.473 EAL: No shared files mode enabled, IPC is disabled 00:03:13.473 EAL: Heap on socket 0 was shrunk by 66MB 00:03:13.473 EAL: Trying to obtain current memory policy. 00:03:13.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:13.473 EAL: Restoring previous memory policy: 4 00:03:13.473 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.473 EAL: request: mp_malloc_sync 00:03:13.473 EAL: No shared files mode enabled, IPC is disabled 00:03:13.473 EAL: Heap on socket 0 was expanded by 130MB 00:03:13.473 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.473 EAL: request: mp_malloc_sync 00:03:13.473 EAL: No shared files mode enabled, IPC is disabled 00:03:13.473 EAL: Heap on socket 0 was shrunk by 130MB 00:03:13.473 EAL: Trying to obtain current memory policy. 00:03:13.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:13.473 EAL: Restoring previous memory policy: 4 00:03:13.473 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.473 EAL: request: mp_malloc_sync 00:03:13.473 EAL: No shared files mode enabled, IPC is disabled 00:03:13.473 EAL: Heap on socket 0 was expanded by 258MB 00:03:13.473 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.473 EAL: request: mp_malloc_sync 00:03:13.473 EAL: No shared files mode enabled, IPC is disabled 00:03:13.473 EAL: Heap on socket 0 was shrunk by 258MB 00:03:13.473 EAL: Trying to obtain current memory policy. 
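The allocation sizes in this suite form a deliberate ladder: 4, 6, 10, 18, 34, 66, 130, 258 MB above and 514, 1026 MB below all fit 2^k + 2, plausibly each power-of-two step plus the 2 MB the heap already holds. The sequence, reproduced:

    # The malloc-test size ladder in MB: 2^k + 2 for k = 1..10.
    for k in $(seq 1 10); do printf '%dMB ' $((2 ** k + 2)); done; echo
    # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB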
00:03:13.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:13.473 EAL: Restoring previous memory policy: 4 00:03:13.473 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.473 EAL: request: mp_malloc_sync 00:03:13.473 EAL: No shared files mode enabled, IPC is disabled 00:03:13.473 EAL: Heap on socket 0 was expanded by 514MB 00:03:13.733 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.733 EAL: request: mp_malloc_sync 00:03:13.733 EAL: No shared files mode enabled, IPC is disabled 00:03:13.733 EAL: Heap on socket 0 was shrunk by 514MB 00:03:13.733 EAL: Trying to obtain current memory policy. 00:03:13.733 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:13.733 EAL: Restoring previous memory policy: 4 00:03:13.733 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.733 EAL: request: mp_malloc_sync 00:03:13.733 EAL: No shared files mode enabled, IPC is disabled 00:03:13.733 EAL: Heap on socket 0 was expanded by 1026MB 00:03:13.993 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.993 EAL: request: mp_malloc_sync 00:03:13.993 EAL: No shared files mode enabled, IPC is disabled 00:03:13.993 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:13.993 passed 00:03:13.993 00:03:13.993 Run Summary: Type Total Ran Passed Failed Inactive 00:03:13.993 suites 1 1 n/a 0 0 00:03:13.993 tests 2 2 2 0 0 00:03:13.993 asserts 497 497 497 0 n/a 00:03:13.993 00:03:13.993 Elapsed time = 0.707 seconds 00:03:13.993 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.993 EAL: request: mp_malloc_sync 00:03:13.993 EAL: No shared files mode enabled, IPC is disabled 00:03:13.993 EAL: Heap on socket 0 was shrunk by 2MB 00:03:13.993 EAL: No shared files mode enabled, IPC is disabled 00:03:13.993 EAL: No shared files mode enabled, IPC is disabled 00:03:13.993 EAL: No shared files mode enabled, IPC is disabled 00:03:13.993 00:03:13.993 real 0m0.866s 00:03:13.993 user 0m0.449s 00:03:13.993 sys 0m0.380s 00:03:13.993 13:53:20 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:13.993 13:53:20 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:13.993 ************************************ 00:03:13.993 END TEST env_vtophys 00:03:13.993 ************************************ 00:03:13.993 13:53:20 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:13.993 13:53:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:13.993 13:53:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:13.993 13:53:20 env -- common/autotest_common.sh@10 -- # set +x 00:03:14.254 ************************************ 00:03:14.254 START TEST env_pci 00:03:14.254 ************************************ 00:03:14.254 13:53:20 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:14.254 00:03:14.254 00:03:14.254 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.254 http://cunit.sourceforge.net/ 00:03:14.254 00:03:14.254 00:03:14.254 Suite: pci 00:03:14.254 Test: pci_hook ...[2024-12-05 13:53:20.315923] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2483688 has claimed it 00:03:14.254 EAL: Cannot find device (10000:00:01.0) 00:03:14.254 EAL: Failed to attach device on primary process 00:03:14.254 passed 00:03:14.254 00:03:14.254 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:14.254 suites 1 1 n/a 0 0 00:03:14.254 tests 1 1 1 0 0 00:03:14.254 asserts 25 25 25 0 n/a 00:03:14.254 00:03:14.254 Elapsed time = 0.031 seconds 00:03:14.254 00:03:14.254 real 0m0.052s 00:03:14.254 user 0m0.017s 00:03:14.254 sys 0m0.034s 00:03:14.254 13:53:20 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:14.254 13:53:20 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:14.254 ************************************ 00:03:14.254 END TEST env_pci 00:03:14.254 ************************************ 00:03:14.254 13:53:20 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:14.254 13:53:20 env -- env/env.sh@15 -- # uname 00:03:14.254 13:53:20 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:14.254 13:53:20 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:14.254 13:53:20 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:14.254 13:53:20 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:14.254 13:53:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:14.254 13:53:20 env -- common/autotest_common.sh@10 -- # set +x 00:03:14.254 ************************************ 00:03:14.254 START TEST env_dpdk_post_init 00:03:14.254 ************************************ 00:03:14.254 13:53:20 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:14.254 EAL: Detected CPU lcores: 128 00:03:14.254 EAL: Detected NUMA nodes: 2 00:03:14.254 EAL: Detected shared linkage of DPDK 00:03:14.254 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:14.254 EAL: Selected IOVA mode 'VA' 00:03:14.254 EAL: VFIO support initialized 00:03:14.254 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:14.514 EAL: Using IOMMU type 1 (Type 1) 00:03:14.514 EAL: Ignore mapping IO port bar(1) 00:03:14.514 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:03:14.774 EAL: Ignore mapping IO port bar(1) 00:03:14.774 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:03:15.034 EAL: Ignore mapping IO port bar(1) 00:03:15.034 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:03:15.294 EAL: Ignore mapping IO port bar(1) 00:03:15.295 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:03:15.557 EAL: Ignore mapping IO port bar(1) 00:03:15.557 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:03:15.557 EAL: Ignore mapping IO port bar(1) 00:03:15.818 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:03:15.818 EAL: Ignore mapping IO port bar(1) 00:03:16.078 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:03:16.078 EAL: Ignore mapping IO port bar(1) 00:03:16.078 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:03:16.338 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:03:16.598 EAL: Ignore mapping IO port bar(1) 00:03:16.598 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:03:16.859 EAL: Ignore mapping IO port bar(1) 00:03:16.859 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:03:17.120 EAL: Ignore mapping IO port bar(1) 00:03:17.120 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:03:17.120 EAL: Ignore mapping IO port bar(1) 00:03:17.381 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:03:17.381 EAL: Ignore mapping IO port bar(1) 00:03:17.641 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:03:17.641 EAL: Ignore mapping IO port bar(1) 00:03:17.901 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:03:17.901 EAL: Ignore mapping IO port bar(1) 00:03:17.901 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:03:18.161 EAL: Ignore mapping IO port bar(1) 00:03:18.161 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:03:18.161 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:03:18.161 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:03:18.421 Starting DPDK initialization... 00:03:18.421 Starting SPDK post initialization... 00:03:18.421 SPDK NVMe probe 00:03:18.421 Attaching to 0000:65:00.0 00:03:18.421 Attached to 0000:65:00.0 00:03:18.421 Cleaning up... 00:03:20.345 00:03:20.345 real 0m5.741s 00:03:20.345 user 0m0.110s 00:03:20.345 sys 0m0.188s 00:03:20.345 13:53:26 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:20.345 13:53:26 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:20.345 ************************************ 00:03:20.345 END TEST env_dpdk_post_init 00:03:20.345 ************************************ 00:03:20.345 13:53:26 env -- env/env.sh@26 -- # uname 00:03:20.345 13:53:26 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:20.345 13:53:26 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:20.345 13:53:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:20.345 13:53:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:20.345 13:53:26 env -- common/autotest_common.sh@10 -- # set +x 00:03:20.345 ************************************ 00:03:20.345 START TEST env_mem_callbacks 00:03:20.345 ************************************ 00:03:20.345 13:53:26 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:20.345 EAL: Detected CPU lcores: 128 00:03:20.345 EAL: Detected NUMA nodes: 2 00:03:20.345 EAL: Detected shared linkage of DPDK 00:03:20.345 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:20.345 EAL: Selected IOVA mode 'VA' 00:03:20.345 EAL: VFIO support initialized 00:03:20.345 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:20.345 00:03:20.345 00:03:20.345 CUnit - A unit testing framework for C - Version 2.1-3 00:03:20.345 http://cunit.sourceforge.net/ 00:03:20.345 00:03:20.345 00:03:20.345 Suite: memory 00:03:20.345 Test: test ... 
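[editor's note] The env_dpdk_post_init run that just finished walked every ioat channel on both sockets plus the single NVMe disk at 0000:65:00.0, attached to it, and cleaned up in about 5.7 seconds. It can be replayed outside the run_test wrapper with exactly the flags from its START line (run as root, since it claims PCI devices; path as laid out in this workspace). The memory-suite trace that follows below is unaffected:

    # Same invocation the harness used above:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000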
00:03:20.345 register 0x200000200000 2097152 00:03:20.345 malloc 3145728 00:03:20.345 register 0x200000400000 4194304 00:03:20.345 buf 0x200000500000 len 3145728 PASSED 00:03:20.345 malloc 64 00:03:20.345 buf 0x2000004fff40 len 64 PASSED 00:03:20.345 malloc 4194304 00:03:20.345 register 0x200000800000 6291456 00:03:20.345 buf 0x200000a00000 len 4194304 PASSED 00:03:20.345 free 0x200000500000 3145728 00:03:20.345 free 0x2000004fff40 64 00:03:20.345 unregister 0x200000400000 4194304 PASSED 00:03:20.345 free 0x200000a00000 4194304 00:03:20.345 unregister 0x200000800000 6291456 PASSED 00:03:20.345 malloc 8388608 00:03:20.345 register 0x200000400000 10485760 00:03:20.345 buf 0x200000600000 len 8388608 PASSED 00:03:20.345 free 0x200000600000 8388608 00:03:20.345 unregister 0x200000400000 10485760 PASSED 00:03:20.345 passed 00:03:20.345 00:03:20.345 Run Summary: Type Total Ran Passed Failed Inactive 00:03:20.345 suites 1 1 n/a 0 0 00:03:20.345 tests 1 1 1 0 0 00:03:20.345 asserts 15 15 15 0 n/a 00:03:20.345 00:03:20.345 Elapsed time = 0.010 seconds 00:03:20.345 00:03:20.345 real 0m0.070s 00:03:20.345 user 0m0.024s 00:03:20.345 sys 0m0.047s 00:03:20.345 13:53:26 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:20.345 13:53:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:20.345 ************************************ 00:03:20.345 END TEST env_mem_callbacks 00:03:20.345 ************************************ 00:03:20.345 00:03:20.345 real 0m7.569s 00:03:20.345 user 0m1.072s 00:03:20.345 sys 0m1.053s 00:03:20.345 13:53:26 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:20.345 13:53:26 env -- common/autotest_common.sh@10 -- # set +x 00:03:20.345 ************************************ 00:03:20.345 END TEST env 00:03:20.345 ************************************ 00:03:20.345 13:53:26 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:20.345 13:53:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:20.345 13:53:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:20.345 13:53:26 -- common/autotest_common.sh@10 -- # set +x 00:03:20.345 ************************************ 00:03:20.345 START TEST rpc 00:03:20.345 ************************************ 00:03:20.345 13:53:26 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:20.345 * Looking for test storage... 
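[editor's note] Two details in the callback trace above are worth calling out: the 64-byte and 3 MB buffers are carved from ranges that were already registered, so only the larger allocations drive fresh register/unregister callbacks, and every range registered during the test is unregistered again before exit. A rough balance check over a saved copy of this trace (the file name mem_callbacks.log is hypothetical) could be:

    # Pair up register/unregister callbacks by address; anything left nonzero
    # was still registered at exit (the initial 2 MB region will show up here).
    awk '$2 == "register"   { n[$3]++ }
         $2 == "unregister" { n[$3]-- }
         END { for (a in n) if (n[a] != 0) print "still registered:", a }' mem_callbacks.log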
00:03:20.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:20.345 13:53:26 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:20.345 13:53:26 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:20.345 13:53:26 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:20.606 13:53:26 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:20.606 13:53:26 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:20.606 13:53:26 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:20.606 13:53:26 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:20.606 13:53:26 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:20.606 13:53:26 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:20.606 13:53:26 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:20.606 13:53:26 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:20.606 13:53:26 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:20.606 13:53:26 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:20.606 13:53:26 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:20.606 13:53:26 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:20.606 13:53:26 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:20.606 13:53:26 rpc -- scripts/common.sh@345 -- # : 1 00:03:20.606 13:53:26 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:20.606 13:53:26 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:20.606 13:53:26 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:20.606 13:53:26 rpc -- scripts/common.sh@353 -- # local d=1 00:03:20.606 13:53:26 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:20.606 13:53:26 rpc -- scripts/common.sh@355 -- # echo 1 00:03:20.606 13:53:26 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:20.606 13:53:26 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:20.606 13:53:26 rpc -- scripts/common.sh@353 -- # local d=2 00:03:20.606 13:53:26 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:20.606 13:53:26 rpc -- scripts/common.sh@355 -- # echo 2 00:03:20.606 13:53:26 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:20.606 13:53:26 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:20.606 13:53:26 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:20.606 13:53:26 rpc -- scripts/common.sh@368 -- # return 0 00:03:20.606 13:53:26 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:20.606 13:53:26 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:20.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.606 --rc genhtml_branch_coverage=1 00:03:20.606 --rc genhtml_function_coverage=1 00:03:20.606 --rc genhtml_legend=1 00:03:20.606 --rc geninfo_all_blocks=1 00:03:20.606 --rc geninfo_unexecuted_blocks=1 00:03:20.606 00:03:20.606 ' 00:03:20.606 13:53:26 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:20.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.606 --rc genhtml_branch_coverage=1 00:03:20.606 --rc genhtml_function_coverage=1 00:03:20.606 --rc genhtml_legend=1 00:03:20.606 --rc geninfo_all_blocks=1 00:03:20.606 --rc geninfo_unexecuted_blocks=1 00:03:20.606 00:03:20.606 ' 00:03:20.606 13:53:26 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:20.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.606 --rc genhtml_branch_coverage=1 00:03:20.606 --rc genhtml_function_coverage=1 
00:03:20.606 --rc genhtml_legend=1 00:03:20.606 --rc geninfo_all_blocks=1 00:03:20.606 --rc geninfo_unexecuted_blocks=1 00:03:20.606 00:03:20.606 ' 00:03:20.606 13:53:26 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:20.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.606 --rc genhtml_branch_coverage=1 00:03:20.606 --rc genhtml_function_coverage=1 00:03:20.606 --rc genhtml_legend=1 00:03:20.606 --rc geninfo_all_blocks=1 00:03:20.606 --rc geninfo_unexecuted_blocks=1 00:03:20.606 00:03:20.606 ' 00:03:20.606 13:53:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2485018 00:03:20.606 13:53:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:20.606 13:53:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2485018 00:03:20.606 13:53:26 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:20.606 13:53:26 rpc -- common/autotest_common.sh@835 -- # '[' -z 2485018 ']' 00:03:20.606 13:53:26 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:20.606 13:53:26 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:20.606 13:53:26 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:20.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:20.606 13:53:26 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:20.606 13:53:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:20.606 [2024-12-05 13:53:26.735108] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:03:20.606 [2024-12-05 13:53:26.735185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2485018 ] 00:03:20.606 [2024-12-05 13:53:26.828999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:20.606 [2024-12-05 13:53:26.881362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:20.606 [2024-12-05 13:53:26.881415] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2485018' to capture a snapshot of events at runtime. 00:03:20.606 [2024-12-05 13:53:26.881423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:20.606 [2024-12-05 13:53:26.881431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:20.606 [2024-12-05 13:53:26.881437] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2485018 for offline analysis/debug. 
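[editor's note] Because spdk_tgt was launched with -e bdev, the bdev tracepoint group (mask 0x8) stays live for this whole rpc suite, and the startup notices above spell out both ways to read it back. Against this very process that would be:

    # Live capture from the running target (pid taken from the notice above):
    spdk_trace -s spdk_tgt -p 2485018
    # Or stash the shm file for offline decoding after the target exits:
    cp /dev/shm/spdk_tgt_trace.pid2485018 /tmp/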
00:03:20.606 [2024-12-05 13:53:26.882216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:21.547 13:53:27 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:21.547 13:53:27 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:21.547 13:53:27 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:21.547 13:53:27 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:21.547 13:53:27 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:21.547 13:53:27 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:21.547 13:53:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:21.547 13:53:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:21.547 13:53:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:21.547 ************************************ 00:03:21.547 START TEST rpc_integrity 00:03:21.547 ************************************ 00:03:21.547 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:21.547 13:53:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:21.547 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:21.547 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:21.547 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:21.547 13:53:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:21.547 13:53:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:21.547 13:53:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:21.547 13:53:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:21.547 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:21.547 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:21.547 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:21.547 13:53:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:21.547 13:53:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:21.547 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:21.547 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:21.547 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:21.547 13:53:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:21.547 { 00:03:21.547 "name": "Malloc0", 00:03:21.547 "aliases": [ 00:03:21.547 "1b3cb3ea-000f-49e8-96ae-54b783c8ffed" 00:03:21.547 ], 00:03:21.547 "product_name": "Malloc disk", 00:03:21.547 "block_size": 512, 00:03:21.547 "num_blocks": 16384, 00:03:21.547 "uuid": "1b3cb3ea-000f-49e8-96ae-54b783c8ffed", 00:03:21.547 "assigned_rate_limits": { 00:03:21.547 "rw_ios_per_sec": 0, 00:03:21.547 "rw_mbytes_per_sec": 0, 00:03:21.547 "r_mbytes_per_sec": 0, 00:03:21.547 "w_mbytes_per_sec": 0 00:03:21.547 }, 
00:03:21.547 "claimed": false, 00:03:21.547 "zoned": false, 00:03:21.547 "supported_io_types": { 00:03:21.547 "read": true, 00:03:21.547 "write": true, 00:03:21.547 "unmap": true, 00:03:21.547 "flush": true, 00:03:21.547 "reset": true, 00:03:21.547 "nvme_admin": false, 00:03:21.547 "nvme_io": false, 00:03:21.547 "nvme_io_md": false, 00:03:21.547 "write_zeroes": true, 00:03:21.547 "zcopy": true, 00:03:21.547 "get_zone_info": false, 00:03:21.547 "zone_management": false, 00:03:21.547 "zone_append": false, 00:03:21.547 "compare": false, 00:03:21.547 "compare_and_write": false, 00:03:21.547 "abort": true, 00:03:21.547 "seek_hole": false, 00:03:21.547 "seek_data": false, 00:03:21.547 "copy": true, 00:03:21.547 "nvme_iov_md": false 00:03:21.547 }, 00:03:21.547 "memory_domains": [ 00:03:21.547 { 00:03:21.547 "dma_device_id": "system", 00:03:21.547 "dma_device_type": 1 00:03:21.547 }, 00:03:21.547 { 00:03:21.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:21.547 "dma_device_type": 2 00:03:21.547 } 00:03:21.547 ], 00:03:21.547 "driver_specific": {} 00:03:21.547 } 00:03:21.547 ]' 00:03:21.547 13:53:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:21.547 13:53:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:21.547 13:53:27 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:21.547 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:21.547 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:21.547 [2024-12-05 13:53:27.733512] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:21.547 [2024-12-05 13:53:27.733561] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:21.547 [2024-12-05 13:53:27.733578] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16f1ae0 00:03:21.547 [2024-12-05 13:53:27.733586] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:21.547 [2024-12-05 13:53:27.735169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:21.547 [2024-12-05 13:53:27.735203] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:21.547 Passthru0 00:03:21.547 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:21.547 13:53:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:21.547 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:21.547 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:21.547 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:21.547 13:53:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:21.547 { 00:03:21.547 "name": "Malloc0", 00:03:21.547 "aliases": [ 00:03:21.547 "1b3cb3ea-000f-49e8-96ae-54b783c8ffed" 00:03:21.547 ], 00:03:21.547 "product_name": "Malloc disk", 00:03:21.547 "block_size": 512, 00:03:21.547 "num_blocks": 16384, 00:03:21.547 "uuid": "1b3cb3ea-000f-49e8-96ae-54b783c8ffed", 00:03:21.547 "assigned_rate_limits": { 00:03:21.547 "rw_ios_per_sec": 0, 00:03:21.547 "rw_mbytes_per_sec": 0, 00:03:21.547 "r_mbytes_per_sec": 0, 00:03:21.547 "w_mbytes_per_sec": 0 00:03:21.547 }, 00:03:21.547 "claimed": true, 00:03:21.547 "claim_type": "exclusive_write", 00:03:21.547 "zoned": false, 00:03:21.547 "supported_io_types": { 00:03:21.547 "read": true, 00:03:21.547 "write": true, 00:03:21.547 "unmap": true, 00:03:21.547 "flush": 
true, 00:03:21.547 "reset": true, 00:03:21.547 "nvme_admin": false, 00:03:21.547 "nvme_io": false, 00:03:21.547 "nvme_io_md": false, 00:03:21.547 "write_zeroes": true, 00:03:21.547 "zcopy": true, 00:03:21.547 "get_zone_info": false, 00:03:21.547 "zone_management": false, 00:03:21.547 "zone_append": false, 00:03:21.547 "compare": false, 00:03:21.547 "compare_and_write": false, 00:03:21.547 "abort": true, 00:03:21.547 "seek_hole": false, 00:03:21.547 "seek_data": false, 00:03:21.547 "copy": true, 00:03:21.547 "nvme_iov_md": false 00:03:21.547 }, 00:03:21.547 "memory_domains": [ 00:03:21.547 { 00:03:21.547 "dma_device_id": "system", 00:03:21.547 "dma_device_type": 1 00:03:21.547 }, 00:03:21.547 { 00:03:21.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:21.547 "dma_device_type": 2 00:03:21.547 } 00:03:21.547 ], 00:03:21.547 "driver_specific": {} 00:03:21.547 }, 00:03:21.547 { 00:03:21.547 "name": "Passthru0", 00:03:21.547 "aliases": [ 00:03:21.547 "4966eb4d-3c9f-583c-8ac3-5ca296536f03" 00:03:21.547 ], 00:03:21.547 "product_name": "passthru", 00:03:21.547 "block_size": 512, 00:03:21.547 "num_blocks": 16384, 00:03:21.547 "uuid": "4966eb4d-3c9f-583c-8ac3-5ca296536f03", 00:03:21.548 "assigned_rate_limits": { 00:03:21.548 "rw_ios_per_sec": 0, 00:03:21.548 "rw_mbytes_per_sec": 0, 00:03:21.548 "r_mbytes_per_sec": 0, 00:03:21.548 "w_mbytes_per_sec": 0 00:03:21.548 }, 00:03:21.548 "claimed": false, 00:03:21.548 "zoned": false, 00:03:21.548 "supported_io_types": { 00:03:21.548 "read": true, 00:03:21.548 "write": true, 00:03:21.548 "unmap": true, 00:03:21.548 "flush": true, 00:03:21.548 "reset": true, 00:03:21.548 "nvme_admin": false, 00:03:21.548 "nvme_io": false, 00:03:21.548 "nvme_io_md": false, 00:03:21.548 "write_zeroes": true, 00:03:21.548 "zcopy": true, 00:03:21.548 "get_zone_info": false, 00:03:21.548 "zone_management": false, 00:03:21.548 "zone_append": false, 00:03:21.548 "compare": false, 00:03:21.548 "compare_and_write": false, 00:03:21.548 "abort": true, 00:03:21.548 "seek_hole": false, 00:03:21.548 "seek_data": false, 00:03:21.548 "copy": true, 00:03:21.548 "nvme_iov_md": false 00:03:21.548 }, 00:03:21.548 "memory_domains": [ 00:03:21.548 { 00:03:21.548 "dma_device_id": "system", 00:03:21.548 "dma_device_type": 1 00:03:21.548 }, 00:03:21.548 { 00:03:21.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:21.548 "dma_device_type": 2 00:03:21.548 } 00:03:21.548 ], 00:03:21.548 "driver_specific": { 00:03:21.548 "passthru": { 00:03:21.548 "name": "Passthru0", 00:03:21.548 "base_bdev_name": "Malloc0" 00:03:21.548 } 00:03:21.548 } 00:03:21.548 } 00:03:21.548 ]' 00:03:21.548 13:53:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:21.548 13:53:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:21.548 13:53:27 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:21.548 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:21.548 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:21.548 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:21.548 13:53:27 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:21.548 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:21.548 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:21.548 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:21.548 13:53:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:21.548 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:21.548 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:21.808 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:21.808 13:53:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:21.808 13:53:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:21.808 13:53:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:21.808 00:03:21.808 real 0m0.299s 00:03:21.808 user 0m0.187s 00:03:21.808 sys 0m0.043s 00:03:21.808 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:21.808 13:53:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:21.808 ************************************ 00:03:21.808 END TEST rpc_integrity 00:03:21.808 ************************************ 00:03:21.808 13:53:27 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:21.808 13:53:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:21.808 13:53:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:21.808 13:53:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:21.808 ************************************ 00:03:21.808 START TEST rpc_plugins 00:03:21.808 ************************************ 00:03:21.808 13:53:27 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:21.808 13:53:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:21.808 13:53:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:21.808 13:53:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:21.808 13:53:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:21.808 13:53:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:21.808 13:53:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:21.808 13:53:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:21.808 13:53:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:21.808 13:53:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:21.808 13:53:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:21.808 { 00:03:21.808 "name": "Malloc1", 00:03:21.808 "aliases": [ 00:03:21.808 "7b4b7088-3eb9-48bd-8b11-f4683408aa61" 00:03:21.808 ], 00:03:21.808 "product_name": "Malloc disk", 00:03:21.808 "block_size": 4096, 00:03:21.808 "num_blocks": 256, 00:03:21.808 "uuid": "7b4b7088-3eb9-48bd-8b11-f4683408aa61", 00:03:21.808 "assigned_rate_limits": { 00:03:21.808 "rw_ios_per_sec": 0, 00:03:21.808 "rw_mbytes_per_sec": 0, 00:03:21.808 "r_mbytes_per_sec": 0, 00:03:21.808 "w_mbytes_per_sec": 0 00:03:21.808 }, 00:03:21.808 "claimed": false, 00:03:21.808 "zoned": false, 00:03:21.808 "supported_io_types": { 00:03:21.808 "read": true, 00:03:21.808 "write": true, 00:03:21.808 "unmap": true, 00:03:21.808 "flush": true, 00:03:21.808 "reset": true, 00:03:21.808 "nvme_admin": false, 00:03:21.808 "nvme_io": false, 00:03:21.808 "nvme_io_md": false, 00:03:21.808 "write_zeroes": true, 00:03:21.808 "zcopy": true, 00:03:21.808 "get_zone_info": false, 00:03:21.808 "zone_management": false, 00:03:21.808 "zone_append": false, 00:03:21.808 "compare": false, 00:03:21.808 "compare_and_write": false, 00:03:21.808 "abort": true, 00:03:21.808 "seek_hole": false, 00:03:21.808 "seek_data": false, 00:03:21.808 "copy": true, 00:03:21.808 "nvme_iov_md": false 
00:03:21.808 }, 00:03:21.808 "memory_domains": [ 00:03:21.808 { 00:03:21.808 "dma_device_id": "system", 00:03:21.808 "dma_device_type": 1 00:03:21.808 }, 00:03:21.808 { 00:03:21.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:21.808 "dma_device_type": 2 00:03:21.808 } 00:03:21.808 ], 00:03:21.808 "driver_specific": {} 00:03:21.808 } 00:03:21.808 ]' 00:03:21.808 13:53:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:21.808 13:53:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:21.808 13:53:28 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:21.808 13:53:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:21.808 13:53:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:21.808 13:53:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:21.808 13:53:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:21.808 13:53:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:21.808 13:53:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:21.808 13:53:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:21.808 13:53:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:21.808 13:53:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:22.069 13:53:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:22.069 00:03:22.069 real 0m0.151s 00:03:22.069 user 0m0.090s 00:03:22.069 sys 0m0.025s 00:03:22.069 13:53:28 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:22.069 13:53:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:22.069 ************************************ 00:03:22.069 END TEST rpc_plugins 00:03:22.069 ************************************ 00:03:22.069 13:53:28 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:22.069 13:53:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:22.069 13:53:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:22.069 13:53:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:22.069 ************************************ 00:03:22.069 START TEST rpc_trace_cmd_test 00:03:22.069 ************************************ 00:03:22.069 13:53:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:22.069 13:53:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:22.069 13:53:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:22.069 13:53:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:22.069 13:53:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:22.069 13:53:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:22.069 13:53:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:22.069 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2485018", 00:03:22.069 "tpoint_group_mask": "0x8", 00:03:22.069 "iscsi_conn": { 00:03:22.069 "mask": "0x2", 00:03:22.069 "tpoint_mask": "0x0" 00:03:22.069 }, 00:03:22.069 "scsi": { 00:03:22.069 "mask": "0x4", 00:03:22.069 "tpoint_mask": "0x0" 00:03:22.069 }, 00:03:22.069 "bdev": { 00:03:22.069 "mask": "0x8", 00:03:22.069 "tpoint_mask": "0xffffffffffffffff" 00:03:22.069 }, 00:03:22.069 "nvmf_rdma": { 00:03:22.069 "mask": "0x10", 00:03:22.069 "tpoint_mask": "0x0" 00:03:22.069 }, 00:03:22.069 "nvmf_tcp": { 00:03:22.069 "mask": "0x20", 00:03:22.069 
"tpoint_mask": "0x0" 00:03:22.069 }, 00:03:22.069 "ftl": { 00:03:22.069 "mask": "0x40", 00:03:22.069 "tpoint_mask": "0x0" 00:03:22.069 }, 00:03:22.069 "blobfs": { 00:03:22.069 "mask": "0x80", 00:03:22.069 "tpoint_mask": "0x0" 00:03:22.069 }, 00:03:22.069 "dsa": { 00:03:22.069 "mask": "0x200", 00:03:22.069 "tpoint_mask": "0x0" 00:03:22.069 }, 00:03:22.069 "thread": { 00:03:22.069 "mask": "0x400", 00:03:22.069 "tpoint_mask": "0x0" 00:03:22.069 }, 00:03:22.069 "nvme_pcie": { 00:03:22.069 "mask": "0x800", 00:03:22.069 "tpoint_mask": "0x0" 00:03:22.069 }, 00:03:22.069 "iaa": { 00:03:22.069 "mask": "0x1000", 00:03:22.069 "tpoint_mask": "0x0" 00:03:22.069 }, 00:03:22.069 "nvme_tcp": { 00:03:22.069 "mask": "0x2000", 00:03:22.069 "tpoint_mask": "0x0" 00:03:22.069 }, 00:03:22.069 "bdev_nvme": { 00:03:22.069 "mask": "0x4000", 00:03:22.069 "tpoint_mask": "0x0" 00:03:22.069 }, 00:03:22.069 "sock": { 00:03:22.069 "mask": "0x8000", 00:03:22.069 "tpoint_mask": "0x0" 00:03:22.069 }, 00:03:22.069 "blob": { 00:03:22.069 "mask": "0x10000", 00:03:22.069 "tpoint_mask": "0x0" 00:03:22.069 }, 00:03:22.069 "bdev_raid": { 00:03:22.069 "mask": "0x20000", 00:03:22.069 "tpoint_mask": "0x0" 00:03:22.069 }, 00:03:22.069 "scheduler": { 00:03:22.069 "mask": "0x40000", 00:03:22.069 "tpoint_mask": "0x0" 00:03:22.069 } 00:03:22.069 }' 00:03:22.069 13:53:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:22.069 13:53:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:22.069 13:53:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:22.069 13:53:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:22.069 13:53:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:22.330 13:53:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:22.330 13:53:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:22.330 13:53:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:22.330 13:53:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:22.330 13:53:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:22.330 00:03:22.330 real 0m0.238s 00:03:22.330 user 0m0.197s 00:03:22.330 sys 0m0.031s 00:03:22.330 13:53:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:22.330 13:53:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:22.330 ************************************ 00:03:22.330 END TEST rpc_trace_cmd_test 00:03:22.330 ************************************ 00:03:22.330 13:53:28 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:22.330 13:53:28 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:22.330 13:53:28 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:22.330 13:53:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:22.330 13:53:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:22.330 13:53:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:22.330 ************************************ 00:03:22.330 START TEST rpc_daemon_integrity 00:03:22.330 ************************************ 00:03:22.330 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:22.330 13:53:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:22.330 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:22.330 13:53:28 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:22.330 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:22.330 13:53:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:22.330 13:53:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:22.330 13:53:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:22.330 13:53:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:22.330 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:22.330 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:22.330 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:22.330 13:53:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:22.330 13:53:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:22.330 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:22.330 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:22.330 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:22.330 13:53:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:22.330 { 00:03:22.330 "name": "Malloc2", 00:03:22.330 "aliases": [ 00:03:22.330 "39ae5a41-6729-480d-bf87-175dfd70c37b" 00:03:22.330 ], 00:03:22.330 "product_name": "Malloc disk", 00:03:22.330 "block_size": 512, 00:03:22.330 "num_blocks": 16384, 00:03:22.330 "uuid": "39ae5a41-6729-480d-bf87-175dfd70c37b", 00:03:22.330 "assigned_rate_limits": { 00:03:22.330 "rw_ios_per_sec": 0, 00:03:22.330 "rw_mbytes_per_sec": 0, 00:03:22.330 "r_mbytes_per_sec": 0, 00:03:22.330 "w_mbytes_per_sec": 0 00:03:22.330 }, 00:03:22.330 "claimed": false, 00:03:22.330 "zoned": false, 00:03:22.330 "supported_io_types": { 00:03:22.330 "read": true, 00:03:22.330 "write": true, 00:03:22.330 "unmap": true, 00:03:22.330 "flush": true, 00:03:22.330 "reset": true, 00:03:22.330 "nvme_admin": false, 00:03:22.330 "nvme_io": false, 00:03:22.330 "nvme_io_md": false, 00:03:22.330 "write_zeroes": true, 00:03:22.330 "zcopy": true, 00:03:22.330 "get_zone_info": false, 00:03:22.330 "zone_management": false, 00:03:22.330 "zone_append": false, 00:03:22.330 "compare": false, 00:03:22.330 "compare_and_write": false, 00:03:22.330 "abort": true, 00:03:22.330 "seek_hole": false, 00:03:22.330 "seek_data": false, 00:03:22.330 "copy": true, 00:03:22.330 "nvme_iov_md": false 00:03:22.330 }, 00:03:22.330 "memory_domains": [ 00:03:22.330 { 00:03:22.330 "dma_device_id": "system", 00:03:22.330 "dma_device_type": 1 00:03:22.330 }, 00:03:22.330 { 00:03:22.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:22.330 "dma_device_type": 2 00:03:22.330 } 00:03:22.330 ], 00:03:22.330 "driver_specific": {} 00:03:22.330 } 00:03:22.330 ]' 00:03:22.330 13:53:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:22.590 13:53:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:22.590 13:53:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:22.590 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:22.590 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:22.590 [2024-12-05 13:53:28.672024] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:22.590 
[2024-12-05 13:53:28.672066] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:22.590 [2024-12-05 13:53:28.672083] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16f2040 00:03:22.590 [2024-12-05 13:53:28.672091] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:22.590 [2024-12-05 13:53:28.673581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:22.590 [2024-12-05 13:53:28.673616] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:22.590 Passthru0 00:03:22.590 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:22.590 13:53:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:22.590 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:22.590 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:22.590 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:22.590 13:53:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:22.590 { 00:03:22.591 "name": "Malloc2", 00:03:22.591 "aliases": [ 00:03:22.591 "39ae5a41-6729-480d-bf87-175dfd70c37b" 00:03:22.591 ], 00:03:22.591 "product_name": "Malloc disk", 00:03:22.591 "block_size": 512, 00:03:22.591 "num_blocks": 16384, 00:03:22.591 "uuid": "39ae5a41-6729-480d-bf87-175dfd70c37b", 00:03:22.591 "assigned_rate_limits": { 00:03:22.591 "rw_ios_per_sec": 0, 00:03:22.591 "rw_mbytes_per_sec": 0, 00:03:22.591 "r_mbytes_per_sec": 0, 00:03:22.591 "w_mbytes_per_sec": 0 00:03:22.591 }, 00:03:22.591 "claimed": true, 00:03:22.591 "claim_type": "exclusive_write", 00:03:22.591 "zoned": false, 00:03:22.591 "supported_io_types": { 00:03:22.591 "read": true, 00:03:22.591 "write": true, 00:03:22.591 "unmap": true, 00:03:22.591 "flush": true, 00:03:22.591 "reset": true, 00:03:22.591 "nvme_admin": false, 00:03:22.591 "nvme_io": false, 00:03:22.591 "nvme_io_md": false, 00:03:22.591 "write_zeroes": true, 00:03:22.591 "zcopy": true, 00:03:22.591 "get_zone_info": false, 00:03:22.591 "zone_management": false, 00:03:22.591 "zone_append": false, 00:03:22.591 "compare": false, 00:03:22.591 "compare_and_write": false, 00:03:22.591 "abort": true, 00:03:22.591 "seek_hole": false, 00:03:22.591 "seek_data": false, 00:03:22.591 "copy": true, 00:03:22.591 "nvme_iov_md": false 00:03:22.591 }, 00:03:22.591 "memory_domains": [ 00:03:22.591 { 00:03:22.591 "dma_device_id": "system", 00:03:22.591 "dma_device_type": 1 00:03:22.591 }, 00:03:22.591 { 00:03:22.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:22.591 "dma_device_type": 2 00:03:22.591 } 00:03:22.591 ], 00:03:22.591 "driver_specific": {} 00:03:22.591 }, 00:03:22.591 { 00:03:22.591 "name": "Passthru0", 00:03:22.591 "aliases": [ 00:03:22.591 "7874f66d-2592-5254-90d5-834d65bb22f5" 00:03:22.591 ], 00:03:22.591 "product_name": "passthru", 00:03:22.591 "block_size": 512, 00:03:22.591 "num_blocks": 16384, 00:03:22.591 "uuid": "7874f66d-2592-5254-90d5-834d65bb22f5", 00:03:22.591 "assigned_rate_limits": { 00:03:22.591 "rw_ios_per_sec": 0, 00:03:22.591 "rw_mbytes_per_sec": 0, 00:03:22.591 "r_mbytes_per_sec": 0, 00:03:22.591 "w_mbytes_per_sec": 0 00:03:22.591 }, 00:03:22.591 "claimed": false, 00:03:22.591 "zoned": false, 00:03:22.591 "supported_io_types": { 00:03:22.591 "read": true, 00:03:22.591 "write": true, 00:03:22.591 "unmap": true, 00:03:22.591 "flush": true, 00:03:22.591 "reset": true, 
00:03:22.591 "nvme_admin": false, 00:03:22.591 "nvme_io": false, 00:03:22.591 "nvme_io_md": false, 00:03:22.591 "write_zeroes": true, 00:03:22.591 "zcopy": true, 00:03:22.591 "get_zone_info": false, 00:03:22.591 "zone_management": false, 00:03:22.591 "zone_append": false, 00:03:22.591 "compare": false, 00:03:22.591 "compare_and_write": false, 00:03:22.591 "abort": true, 00:03:22.591 "seek_hole": false, 00:03:22.591 "seek_data": false, 00:03:22.591 "copy": true, 00:03:22.591 "nvme_iov_md": false 00:03:22.591 }, 00:03:22.591 "memory_domains": [ 00:03:22.591 { 00:03:22.591 "dma_device_id": "system", 00:03:22.591 "dma_device_type": 1 00:03:22.591 }, 00:03:22.591 { 00:03:22.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:22.591 "dma_device_type": 2 00:03:22.591 } 00:03:22.591 ], 00:03:22.591 "driver_specific": { 00:03:22.591 "passthru": { 00:03:22.591 "name": "Passthru0", 00:03:22.591 "base_bdev_name": "Malloc2" 00:03:22.591 } 00:03:22.591 } 00:03:22.591 } 00:03:22.591 ]' 00:03:22.591 13:53:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:22.591 13:53:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:22.591 13:53:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:22.591 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:22.591 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:22.591 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:22.591 13:53:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:22.591 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:22.591 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:22.591 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:22.591 13:53:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:22.591 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:22.591 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:22.591 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:22.591 13:53:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:22.591 13:53:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:22.591 13:53:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:22.591 00:03:22.591 real 0m0.300s 00:03:22.591 user 0m0.180s 00:03:22.591 sys 0m0.054s 00:03:22.591 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:22.591 13:53:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:22.591 ************************************ 00:03:22.591 END TEST rpc_daemon_integrity 00:03:22.591 ************************************ 00:03:22.591 13:53:28 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:22.591 13:53:28 rpc -- rpc/rpc.sh@84 -- # killprocess 2485018 00:03:22.591 13:53:28 rpc -- common/autotest_common.sh@954 -- # '[' -z 2485018 ']' 00:03:22.591 13:53:28 rpc -- common/autotest_common.sh@958 -- # kill -0 2485018 00:03:22.591 13:53:28 rpc -- common/autotest_common.sh@959 -- # uname 00:03:22.591 13:53:28 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:22.591 13:53:28 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2485018 
00:03:22.850 13:53:28 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:22.851 13:53:28 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:22.851 13:53:28 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2485018' 00:03:22.851 killing process with pid 2485018 00:03:22.851 13:53:28 rpc -- common/autotest_common.sh@973 -- # kill 2485018 00:03:22.851 13:53:28 rpc -- common/autotest_common.sh@978 -- # wait 2485018 00:03:23.110 00:03:23.110 real 0m2.710s 00:03:23.110 user 0m3.422s 00:03:23.110 sys 0m0.869s 00:03:23.110 13:53:29 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:23.110 13:53:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:23.110 ************************************ 00:03:23.110 END TEST rpc 00:03:23.110 ************************************ 00:03:23.110 13:53:29 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:23.110 13:53:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:23.110 13:53:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:23.110 13:53:29 -- common/autotest_common.sh@10 -- # set +x 00:03:23.110 ************************************ 00:03:23.110 START TEST skip_rpc 00:03:23.111 ************************************ 00:03:23.111 13:53:29 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:23.111 * Looking for test storage... 00:03:23.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:23.111 13:53:29 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:23.111 13:53:29 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:23.111 13:53:29 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:23.371 13:53:29 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:23.371 13:53:29 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:23.371 13:53:29 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:23.371 13:53:29 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:23.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:23.371 --rc genhtml_branch_coverage=1 00:03:23.371 --rc genhtml_function_coverage=1 00:03:23.371 --rc genhtml_legend=1 00:03:23.371 --rc geninfo_all_blocks=1 00:03:23.371 --rc geninfo_unexecuted_blocks=1 00:03:23.371 00:03:23.371 ' 00:03:23.371 13:53:29 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:23.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:23.371 --rc genhtml_branch_coverage=1 00:03:23.371 --rc genhtml_function_coverage=1 00:03:23.371 --rc genhtml_legend=1 00:03:23.371 --rc geninfo_all_blocks=1 00:03:23.371 --rc geninfo_unexecuted_blocks=1 00:03:23.371 00:03:23.371 ' 00:03:23.371 13:53:29 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:23.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:23.371 --rc genhtml_branch_coverage=1 00:03:23.371 --rc genhtml_function_coverage=1 00:03:23.371 --rc genhtml_legend=1 00:03:23.371 --rc geninfo_all_blocks=1 00:03:23.371 --rc geninfo_unexecuted_blocks=1 00:03:23.371 00:03:23.371 ' 00:03:23.371 13:53:29 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:23.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:23.371 --rc genhtml_branch_coverage=1 00:03:23.371 --rc genhtml_function_coverage=1 00:03:23.371 --rc genhtml_legend=1 00:03:23.371 --rc geninfo_all_blocks=1 00:03:23.371 --rc geninfo_unexecuted_blocks=1 00:03:23.371 00:03:23.371 ' 00:03:23.371 13:53:29 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:23.371 13:53:29 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:23.371 13:53:29 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:23.371 13:53:29 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:23.371 13:53:29 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:23.371 13:53:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:23.371 ************************************ 00:03:23.371 START TEST skip_rpc 00:03:23.371 ************************************ 00:03:23.371 13:53:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:23.371 
13:53:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2485670 00:03:23.371 13:53:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:23.371 13:53:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:23.371 13:53:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:23.372 [2024-12-05 13:53:29.556845] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:03:23.372 [2024-12-05 13:53:29.556904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2485670 ] 00:03:23.372 [2024-12-05 13:53:29.648616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:23.631 [2024-12-05 13:53:29.701571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:28.930 13:53:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:28.930 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:28.930 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:28.930 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:28.930 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:28.930 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:28.930 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:28.930 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:28.930 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.930 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.930 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:28.930 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:28.930 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:28.930 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:28.930 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:28.930 13:53:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:28.930 13:53:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2485670 00:03:28.930 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2485670 ']' 00:03:28.930 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2485670 00:03:28.930 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:28.930 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:28.930 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2485670 00:03:28.930 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:28.930 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:28.931 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2485670' 00:03:28.931 killing process with pid 2485670 00:03:28.931 13:53:34 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2485670 00:03:28.931 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2485670 00:03:28.931 00:03:28.931 real 0m5.266s 00:03:28.931 user 0m5.014s 00:03:28.931 sys 0m0.297s 00:03:28.931 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:28.931 13:53:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.931 ************************************ 00:03:28.931 END TEST skip_rpc 00:03:28.931 ************************************ 00:03:28.931 13:53:34 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:28.931 13:53:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:28.931 13:53:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:28.931 13:53:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.931 ************************************ 00:03:28.931 START TEST skip_rpc_with_json 00:03:28.931 ************************************ 00:03:28.931 13:53:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:28.931 13:53:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:28.931 13:53:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2486735 00:03:28.931 13:53:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:28.931 13:53:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2486735 00:03:28.931 13:53:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:28.931 13:53:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2486735 ']' 00:03:28.931 13:53:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:28.931 13:53:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:28.931 13:53:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:28.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:28.931 13:53:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:28.931 13:53:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:28.931 [2024-12-05 13:53:34.903145] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
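
The skip_rpc test that just ended hinges on autotest_common.sh's NOT wrapper: with --no-rpc-server the target must refuse RPCs, so the test passes only when rpc_cmd fails. A minimal sketch of that inversion pattern (simplified; as the es checks in the trace show, the real helper also special-cases exit codes above 128 from signals):

    # Succeed only if the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1        # unexpected success
        fi
        return 0            # expected failure
    }

    # The check passes only when no RPC server is listening.
    NOT ./scripts/rpc.py spdk_get_version
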
00:03:28.931 [2024-12-05 13:53:34.903204] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2486735 ] 00:03:28.931 [2024-12-05 13:53:34.987999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:28.931 [2024-12-05 13:53:35.021468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:29.499 13:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:29.499 13:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:29.499 13:53:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:29.499 13:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:29.499 13:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:29.499 [2024-12-05 13:53:35.692176] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:29.499 request: 00:03:29.499 { 00:03:29.499 "trtype": "tcp", 00:03:29.499 "method": "nvmf_get_transports", 00:03:29.499 "req_id": 1 00:03:29.499 } 00:03:29.499 Got JSON-RPC error response 00:03:29.499 response: 00:03:29.499 { 00:03:29.499 "code": -19, 00:03:29.499 "message": "No such device" 00:03:29.499 } 00:03:29.499 13:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:29.499 13:53:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:29.499 13:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:29.499 13:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:29.499 [2024-12-05 13:53:35.704278] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:29.499 13:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:29.499 13:53:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:29.499 13:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:29.499 13:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:29.759 13:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:29.759 13:53:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:29.759 { 00:03:29.759 "subsystems": [ 00:03:29.759 { 00:03:29.759 "subsystem": "fsdev", 00:03:29.759 "config": [ 00:03:29.759 { 00:03:29.759 "method": "fsdev_set_opts", 00:03:29.759 "params": { 00:03:29.759 "fsdev_io_pool_size": 65535, 00:03:29.759 "fsdev_io_cache_size": 256 00:03:29.759 } 00:03:29.759 } 00:03:29.759 ] 00:03:29.759 }, 00:03:29.759 { 00:03:29.759 "subsystem": "vfio_user_target", 00:03:29.759 "config": null 00:03:29.759 }, 00:03:29.759 { 00:03:29.759 "subsystem": "keyring", 00:03:29.759 "config": [] 00:03:29.759 }, 00:03:29.760 { 00:03:29.760 "subsystem": "iobuf", 00:03:29.760 "config": [ 00:03:29.760 { 00:03:29.760 "method": "iobuf_set_options", 00:03:29.760 "params": { 00:03:29.760 "small_pool_count": 8192, 00:03:29.760 "large_pool_count": 1024, 00:03:29.760 "small_bufsize": 8192, 00:03:29.760 "large_bufsize": 135168, 00:03:29.760 "enable_numa": false 00:03:29.760 } 00:03:29.760 } 
00:03:29.760 ] 00:03:29.760 }, 00:03:29.760 { 00:03:29.760 "subsystem": "sock", 00:03:29.760 "config": [ 00:03:29.760 { 00:03:29.760 "method": "sock_set_default_impl", 00:03:29.760 "params": { 00:03:29.760 "impl_name": "posix" 00:03:29.760 } 00:03:29.760 }, 00:03:29.760 { 00:03:29.760 "method": "sock_impl_set_options", 00:03:29.760 "params": { 00:03:29.760 "impl_name": "ssl", 00:03:29.760 "recv_buf_size": 4096, 00:03:29.760 "send_buf_size": 4096, 00:03:29.760 "enable_recv_pipe": true, 00:03:29.760 "enable_quickack": false, 00:03:29.760 "enable_placement_id": 0, 00:03:29.760 "enable_zerocopy_send_server": true, 00:03:29.760 "enable_zerocopy_send_client": false, 00:03:29.760 "zerocopy_threshold": 0, 00:03:29.760 "tls_version": 0, 00:03:29.760 "enable_ktls": false 00:03:29.760 } 00:03:29.760 }, 00:03:29.760 { 00:03:29.760 "method": "sock_impl_set_options", 00:03:29.760 "params": { 00:03:29.760 "impl_name": "posix", 00:03:29.760 "recv_buf_size": 2097152, 00:03:29.760 "send_buf_size": 2097152, 00:03:29.760 "enable_recv_pipe": true, 00:03:29.760 "enable_quickack": false, 00:03:29.760 "enable_placement_id": 0, 00:03:29.760 "enable_zerocopy_send_server": true, 00:03:29.760 "enable_zerocopy_send_client": false, 00:03:29.760 "zerocopy_threshold": 0, 00:03:29.760 "tls_version": 0, 00:03:29.760 "enable_ktls": false 00:03:29.760 } 00:03:29.760 } 00:03:29.760 ] 00:03:29.760 }, 00:03:29.760 { 00:03:29.760 "subsystem": "vmd", 00:03:29.760 "config": [] 00:03:29.760 }, 00:03:29.760 { 00:03:29.760 "subsystem": "accel", 00:03:29.760 "config": [ 00:03:29.760 { 00:03:29.760 "method": "accel_set_options", 00:03:29.760 "params": { 00:03:29.760 "small_cache_size": 128, 00:03:29.760 "large_cache_size": 16, 00:03:29.760 "task_count": 2048, 00:03:29.760 "sequence_count": 2048, 00:03:29.760 "buf_count": 2048 00:03:29.760 } 00:03:29.760 } 00:03:29.760 ] 00:03:29.760 }, 00:03:29.760 { 00:03:29.760 "subsystem": "bdev", 00:03:29.760 "config": [ 00:03:29.760 { 00:03:29.760 "method": "bdev_set_options", 00:03:29.760 "params": { 00:03:29.760 "bdev_io_pool_size": 65535, 00:03:29.760 "bdev_io_cache_size": 256, 00:03:29.760 "bdev_auto_examine": true, 00:03:29.760 "iobuf_small_cache_size": 128, 00:03:29.760 "iobuf_large_cache_size": 16 00:03:29.760 } 00:03:29.760 }, 00:03:29.760 { 00:03:29.760 "method": "bdev_raid_set_options", 00:03:29.760 "params": { 00:03:29.760 "process_window_size_kb": 1024, 00:03:29.760 "process_max_bandwidth_mb_sec": 0 00:03:29.760 } 00:03:29.760 }, 00:03:29.760 { 00:03:29.760 "method": "bdev_iscsi_set_options", 00:03:29.760 "params": { 00:03:29.760 "timeout_sec": 30 00:03:29.760 } 00:03:29.760 }, 00:03:29.760 { 00:03:29.760 "method": "bdev_nvme_set_options", 00:03:29.760 "params": { 00:03:29.760 "action_on_timeout": "none", 00:03:29.760 "timeout_us": 0, 00:03:29.760 "timeout_admin_us": 0, 00:03:29.760 "keep_alive_timeout_ms": 10000, 00:03:29.760 "arbitration_burst": 0, 00:03:29.760 "low_priority_weight": 0, 00:03:29.760 "medium_priority_weight": 0, 00:03:29.760 "high_priority_weight": 0, 00:03:29.760 "nvme_adminq_poll_period_us": 10000, 00:03:29.760 "nvme_ioq_poll_period_us": 0, 00:03:29.760 "io_queue_requests": 0, 00:03:29.760 "delay_cmd_submit": true, 00:03:29.760 "transport_retry_count": 4, 00:03:29.760 "bdev_retry_count": 3, 00:03:29.760 "transport_ack_timeout": 0, 00:03:29.760 "ctrlr_loss_timeout_sec": 0, 00:03:29.760 "reconnect_delay_sec": 0, 00:03:29.760 "fast_io_fail_timeout_sec": 0, 00:03:29.760 "disable_auto_failback": false, 00:03:29.760 "generate_uuids": false, 00:03:29.760 "transport_tos": 
0, 00:03:29.760 "nvme_error_stat": false, 00:03:29.760 "rdma_srq_size": 0, 00:03:29.760 "io_path_stat": false, 00:03:29.760 "allow_accel_sequence": false, 00:03:29.760 "rdma_max_cq_size": 0, 00:03:29.760 "rdma_cm_event_timeout_ms": 0, 00:03:29.760 "dhchap_digests": [ 00:03:29.760 "sha256", 00:03:29.760 "sha384", 00:03:29.760 "sha512" 00:03:29.760 ], 00:03:29.760 "dhchap_dhgroups": [ 00:03:29.760 "null", 00:03:29.760 "ffdhe2048", 00:03:29.760 "ffdhe3072", 00:03:29.760 "ffdhe4096", 00:03:29.760 "ffdhe6144", 00:03:29.760 "ffdhe8192" 00:03:29.760 ] 00:03:29.760 } 00:03:29.760 }, 00:03:29.760 { 00:03:29.760 "method": "bdev_nvme_set_hotplug", 00:03:29.760 "params": { 00:03:29.760 "period_us": 100000, 00:03:29.760 "enable": false 00:03:29.760 } 00:03:29.760 }, 00:03:29.760 { 00:03:29.760 "method": "bdev_wait_for_examine" 00:03:29.760 } 00:03:29.760 ] 00:03:29.760 }, 00:03:29.760 { 00:03:29.760 "subsystem": "scsi", 00:03:29.760 "config": null 00:03:29.760 }, 00:03:29.760 { 00:03:29.760 "subsystem": "scheduler", 00:03:29.760 "config": [ 00:03:29.760 { 00:03:29.760 "method": "framework_set_scheduler", 00:03:29.760 "params": { 00:03:29.760 "name": "static" 00:03:29.760 } 00:03:29.760 } 00:03:29.760 ] 00:03:29.760 }, 00:03:29.760 { 00:03:29.760 "subsystem": "vhost_scsi", 00:03:29.760 "config": [] 00:03:29.760 }, 00:03:29.760 { 00:03:29.760 "subsystem": "vhost_blk", 00:03:29.760 "config": [] 00:03:29.761 }, 00:03:29.761 { 00:03:29.761 "subsystem": "ublk", 00:03:29.761 "config": [] 00:03:29.761 }, 00:03:29.761 { 00:03:29.761 "subsystem": "nbd", 00:03:29.761 "config": [] 00:03:29.761 }, 00:03:29.761 { 00:03:29.761 "subsystem": "nvmf", 00:03:29.761 "config": [ 00:03:29.761 { 00:03:29.761 "method": "nvmf_set_config", 00:03:29.761 "params": { 00:03:29.761 "discovery_filter": "match_any", 00:03:29.761 "admin_cmd_passthru": { 00:03:29.761 "identify_ctrlr": false 00:03:29.761 }, 00:03:29.761 "dhchap_digests": [ 00:03:29.761 "sha256", 00:03:29.761 "sha384", 00:03:29.761 "sha512" 00:03:29.761 ], 00:03:29.761 "dhchap_dhgroups": [ 00:03:29.761 "null", 00:03:29.761 "ffdhe2048", 00:03:29.761 "ffdhe3072", 00:03:29.761 "ffdhe4096", 00:03:29.761 "ffdhe6144", 00:03:29.761 "ffdhe8192" 00:03:29.761 ] 00:03:29.761 } 00:03:29.761 }, 00:03:29.761 { 00:03:29.761 "method": "nvmf_set_max_subsystems", 00:03:29.761 "params": { 00:03:29.761 "max_subsystems": 1024 00:03:29.761 } 00:03:29.761 }, 00:03:29.761 { 00:03:29.761 "method": "nvmf_set_crdt", 00:03:29.761 "params": { 00:03:29.761 "crdt1": 0, 00:03:29.761 "crdt2": 0, 00:03:29.761 "crdt3": 0 00:03:29.761 } 00:03:29.761 }, 00:03:29.761 { 00:03:29.761 "method": "nvmf_create_transport", 00:03:29.761 "params": { 00:03:29.761 "trtype": "TCP", 00:03:29.761 "max_queue_depth": 128, 00:03:29.761 "max_io_qpairs_per_ctrlr": 127, 00:03:29.761 "in_capsule_data_size": 4096, 00:03:29.761 "max_io_size": 131072, 00:03:29.761 "io_unit_size": 131072, 00:03:29.761 "max_aq_depth": 128, 00:03:29.761 "num_shared_buffers": 511, 00:03:29.761 "buf_cache_size": 4294967295, 00:03:29.761 "dif_insert_or_strip": false, 00:03:29.761 "zcopy": false, 00:03:29.761 "c2h_success": true, 00:03:29.761 "sock_priority": 0, 00:03:29.761 "abort_timeout_sec": 1, 00:03:29.761 "ack_timeout": 0, 00:03:29.761 "data_wr_pool_size": 0 00:03:29.761 } 00:03:29.761 } 00:03:29.761 ] 00:03:29.761 }, 00:03:29.761 { 00:03:29.761 "subsystem": "iscsi", 00:03:29.761 "config": [ 00:03:29.761 { 00:03:29.761 "method": "iscsi_set_options", 00:03:29.761 "params": { 00:03:29.761 "node_base": "iqn.2016-06.io.spdk", 00:03:29.761 "max_sessions": 
128, 00:03:29.761 "max_connections_per_session": 2, 00:03:29.761 "max_queue_depth": 64, 00:03:29.761 "default_time2wait": 2, 00:03:29.761 "default_time2retain": 20, 00:03:29.761 "first_burst_length": 8192, 00:03:29.761 "immediate_data": true, 00:03:29.761 "allow_duplicated_isid": false, 00:03:29.761 "error_recovery_level": 0, 00:03:29.761 "nop_timeout": 60, 00:03:29.761 "nop_in_interval": 30, 00:03:29.761 "disable_chap": false, 00:03:29.761 "require_chap": false, 00:03:29.761 "mutual_chap": false, 00:03:29.761 "chap_group": 0, 00:03:29.761 "max_large_datain_per_connection": 64, 00:03:29.761 "max_r2t_per_connection": 4, 00:03:29.761 "pdu_pool_size": 36864, 00:03:29.761 "immediate_data_pool_size": 16384, 00:03:29.761 "data_out_pool_size": 2048 00:03:29.761 } 00:03:29.761 } 00:03:29.761 ] 00:03:29.761 } 00:03:29.761 ] 00:03:29.761 } 00:03:29.761 13:53:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:29.761 13:53:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2486735 00:03:29.761 13:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2486735 ']' 00:03:29.761 13:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2486735 00:03:29.761 13:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:29.761 13:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:29.761 13:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2486735 00:03:29.761 13:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:29.761 13:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:29.761 13:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2486735' 00:03:29.761 killing process with pid 2486735 00:03:29.761 13:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2486735 00:03:29.761 13:53:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2486735 00:03:30.021 13:53:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2487055 00:03:30.021 13:53:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:30.021 13:53:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2487055 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2487055 ']' 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2487055 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2487055 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2487055' 00:03:35.302 killing process with pid 2487055 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2487055 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2487055 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:35.302 00:03:35.302 real 0m6.547s 00:03:35.302 user 0m6.438s 00:03:35.302 sys 0m0.566s 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:35.302 ************************************ 00:03:35.302 END TEST skip_rpc_with_json 00:03:35.302 ************************************ 00:03:35.302 13:53:41 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:35.302 13:53:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.302 13:53:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.302 13:53:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.302 ************************************ 00:03:35.302 START TEST skip_rpc_with_delay 00:03:35.302 ************************************ 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:35.302 
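
The skip_rpc_with_json run that just completed is a configuration round trip: dump the live target state with save_config, relaunch from that file, and verify the transport came back. Condensed to its essentials (paths shortened relative to the trace):

    # 1. Capture the running target's configuration.
    ./scripts/rpc.py save_config > test/rpc/config.json

    # 2. Relaunch non-interactively from the saved file, logging to a file.
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 \
        --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
    sleep 5

    # 3. The TCP transport defined in the JSON must be re-created at boot.
    grep -q 'TCP Transport Init' test/rpc/log.txt
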
[2024-12-05 13:53:41.528400] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:35.302 00:03:35.302 real 0m0.077s 00:03:35.302 user 0m0.048s 00:03:35.302 sys 0m0.029s 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.302 13:53:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:35.302 ************************************ 00:03:35.302 END TEST skip_rpc_with_delay 00:03:35.302 ************************************ 00:03:35.302 13:53:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:35.302 13:53:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:35.302 13:53:41 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:35.302 13:53:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.302 13:53:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.302 13:53:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.562 ************************************ 00:03:35.562 START TEST exit_on_failed_rpc_init 00:03:35.562 ************************************ 00:03:35.562 13:53:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:35.562 13:53:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2488163 00:03:35.562 13:53:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2488163 00:03:35.562 13:53:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:35.562 13:53:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2488163 ']' 00:03:35.562 13:53:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:35.562 13:53:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:35.562 13:53:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:35.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:35.562 13:53:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:35.562 13:53:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:35.562 [2024-12-05 13:53:41.689819] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
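
The *ERROR* above is exactly what skip_rpc_with_delay provokes: --wait-for-rpc tells the app to pause before subsystem initialization until an RPC releases it, which cannot work when --no-rpc-server removes the RPC server entirely. For reference, a sketch of the flag's normal use (RPC names as in SPDK's rpc.py; the pre-init step mentioned is illustrative):

    # Start the target but hold initialization until told to proceed.
    ./build/bin/spdk_tgt -m 0x1 --wait-for-rpc &

    # Pre-init tuning would happen here, e.g. sock_impl_set_options, then:
    ./scripts/rpc.py framework_start_init    # let startup continue
    ./scripts/rpc.py framework_wait_init     # block until init finishes
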
00:03:35.562 [2024-12-05 13:53:41.689881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2488163 ] 00:03:35.562 [2024-12-05 13:53:41.775008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:35.562 [2024-12-05 13:53:41.816281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:36.501 [2024-12-05 13:53:42.539913] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:03:36.501 [2024-12-05 13:53:42.539962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2488450 ] 00:03:36.501 [2024-12-05 13:53:42.627239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:36.501 [2024-12-05 13:53:42.663032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:36.501 [2024-12-05 13:53:42.663080] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
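
This "socket in use" failure is the intended outcome: the second spdk_tgt instance (-m 0x2) races for the default RPC socket /var/tmp/spdk.sock while the first instance still holds it. When two instances genuinely need to coexist, each takes a private socket via -r, as the json_config test further down in this log does; a sketch with an illustrative second socket path:

    # First instance, default socket, core 0.
    ./build/bin/spdk_tgt -m 0x1 &

    # Second instance, core 1, its own RPC socket (path is illustrative).
    ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &

    # Address each instance explicitly.
    ./scripts/rpc.py -s /var/tmp/spdk.sock  spdk_get_version
    ./scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version
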
00:03:36.501 [2024-12-05 13:53:42.663090] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:36.501 [2024-12-05 13:53:42.663097] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2488163 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2488163 ']' 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2488163 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2488163 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2488163' 00:03:36.501 killing process with pid 2488163 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2488163 00:03:36.501 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2488163 00:03:36.766 00:03:36.766 real 0m1.326s 00:03:36.766 user 0m1.552s 00:03:36.766 sys 0m0.381s 00:03:36.766 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:36.766 13:53:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:36.766 ************************************ 00:03:36.766 END TEST exit_on_failed_rpc_init 00:03:36.766 ************************************ 00:03:36.766 13:53:42 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:36.766 00:03:36.766 real 0m13.743s 00:03:36.766 user 0m13.289s 00:03:36.766 sys 0m1.590s 00:03:36.766 13:53:42 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:36.766 13:53:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:36.766 ************************************ 00:03:36.766 END TEST skip_rpc 00:03:36.766 ************************************ 00:03:36.766 13:53:43 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:36.766 13:53:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:36.766 13:53:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:36.766 13:53:43 -- 
common/autotest_common.sh@10 -- # set +x 00:03:37.126 ************************************ 00:03:37.126 START TEST rpc_client 00:03:37.126 ************************************ 00:03:37.126 13:53:43 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:37.126 * Looking for test storage... 00:03:37.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:37.126 13:53:43 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:37.126 13:53:43 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:03:37.126 13:53:43 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:37.126 13:53:43 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:37.126 13:53:43 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:37.126 13:53:43 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:37.126 13:53:43 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:37.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.126 --rc genhtml_branch_coverage=1 00:03:37.126 --rc genhtml_function_coverage=1 00:03:37.126 --rc genhtml_legend=1 00:03:37.126 --rc geninfo_all_blocks=1 00:03:37.126 --rc geninfo_unexecuted_blocks=1 00:03:37.126 00:03:37.126 ' 00:03:37.126 13:53:43 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:37.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.126 --rc genhtml_branch_coverage=1 00:03:37.126 --rc genhtml_function_coverage=1 00:03:37.126 --rc genhtml_legend=1 00:03:37.126 --rc geninfo_all_blocks=1 00:03:37.126 --rc geninfo_unexecuted_blocks=1 00:03:37.126 00:03:37.126 ' 00:03:37.126 13:53:43 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:37.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.126 --rc genhtml_branch_coverage=1 00:03:37.126 --rc genhtml_function_coverage=1 00:03:37.127 --rc genhtml_legend=1 00:03:37.127 --rc geninfo_all_blocks=1 00:03:37.127 --rc geninfo_unexecuted_blocks=1 00:03:37.127 00:03:37.127 ' 00:03:37.127 13:53:43 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:37.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.127 --rc genhtml_branch_coverage=1 00:03:37.127 --rc genhtml_function_coverage=1 00:03:37.127 --rc genhtml_legend=1 00:03:37.127 --rc geninfo_all_blocks=1 00:03:37.127 --rc geninfo_unexecuted_blocks=1 00:03:37.127 00:03:37.127 ' 00:03:37.127 13:53:43 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:37.127 OK 00:03:37.127 13:53:43 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:37.127 00:03:37.127 real 0m0.227s 00:03:37.127 user 0m0.130s 00:03:37.127 sys 0m0.112s 00:03:37.127 13:53:43 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:37.127 13:53:43 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:37.127 ************************************ 00:03:37.127 END TEST rpc_client 00:03:37.127 ************************************ 00:03:37.127 13:53:43 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
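
The scripts/common.sh trace above (repeated below when json_config starts) is the harness checking whether the installed lcov predates 2.x before enabling the extra branch/function coverage options: both version strings are split on '.', '-' and ':' and compared field by field. The same logic in compact form (a sketch of what the traced helpers do, not a verbatim copy):

    version_lt() {                      # true when $1 sorts before $2
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                        # equal is not less-than
    }

    version_lt "1.15" "2" && echo "old lcov: add branch/function coverage flags"

Note that ${a[i]:-0} assumes purely numeric components, which holds for the lcov versions compared here.
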
00:03:37.127 13:53:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:37.127 13:53:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:37.127 13:53:43 -- common/autotest_common.sh@10 -- # set +x 00:03:37.127 ************************************ 00:03:37.127 START TEST json_config 00:03:37.127 ************************************ 00:03:37.127 13:53:43 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:37.432 13:53:43 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:37.432 13:53:43 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:03:37.432 13:53:43 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:37.432 13:53:43 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:37.432 13:53:43 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:37.432 13:53:43 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:37.432 13:53:43 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:37.432 13:53:43 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:37.432 13:53:43 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:37.432 13:53:43 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:37.432 13:53:43 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:37.432 13:53:43 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:37.432 13:53:43 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:37.432 13:53:43 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:37.432 13:53:43 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:37.432 13:53:43 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:37.432 13:53:43 json_config -- scripts/common.sh@345 -- # : 1 00:03:37.432 13:53:43 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:37.432 13:53:43 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:37.432 13:53:43 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:37.432 13:53:43 json_config -- scripts/common.sh@353 -- # local d=1 00:03:37.432 13:53:43 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:37.432 13:53:43 json_config -- scripts/common.sh@355 -- # echo 1 00:03:37.432 13:53:43 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:37.432 13:53:43 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:37.432 13:53:43 json_config -- scripts/common.sh@353 -- # local d=2 00:03:37.432 13:53:43 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:37.432 13:53:43 json_config -- scripts/common.sh@355 -- # echo 2 00:03:37.432 13:53:43 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:37.432 13:53:43 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:37.432 13:53:43 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:37.432 13:53:43 json_config -- scripts/common.sh@368 -- # return 0 00:03:37.432 13:53:43 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:37.432 13:53:43 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:37.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.432 --rc genhtml_branch_coverage=1 00:03:37.432 --rc genhtml_function_coverage=1 00:03:37.432 --rc genhtml_legend=1 00:03:37.432 --rc geninfo_all_blocks=1 00:03:37.432 --rc geninfo_unexecuted_blocks=1 00:03:37.432 00:03:37.432 ' 00:03:37.432 13:53:43 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:37.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.433 --rc genhtml_branch_coverage=1 00:03:37.433 --rc genhtml_function_coverage=1 00:03:37.433 --rc genhtml_legend=1 00:03:37.433 --rc geninfo_all_blocks=1 00:03:37.433 --rc geninfo_unexecuted_blocks=1 00:03:37.433 00:03:37.433 ' 00:03:37.433 13:53:43 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:37.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.433 --rc genhtml_branch_coverage=1 00:03:37.433 --rc genhtml_function_coverage=1 00:03:37.433 --rc genhtml_legend=1 00:03:37.433 --rc geninfo_all_blocks=1 00:03:37.433 --rc geninfo_unexecuted_blocks=1 00:03:37.433 00:03:37.433 ' 00:03:37.433 13:53:43 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:37.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:37.433 --rc genhtml_branch_coverage=1 00:03:37.433 --rc genhtml_function_coverage=1 00:03:37.433 --rc genhtml_legend=1 00:03:37.433 --rc geninfo_all_blocks=1 00:03:37.433 --rc geninfo_unexecuted_blocks=1 00:03:37.433 00:03:37.433 ' 00:03:37.433 13:53:43 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:03:37.433 13:53:43 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:37.433 13:53:43 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:37.433 13:53:43 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:37.433 13:53:43 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:37.433 13:53:43 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:37.433 13:53:43 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.433 13:53:43 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.433 13:53:43 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.433 13:53:43 json_config -- paths/export.sh@5 -- # export PATH 00:03:37.433 13:53:43 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@51 -- # : 0 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
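
nvmf/common.sh above assembles the host identity that later nvme-cli calls present to the target: an NQN from nvme gen-hostnqn, the UUID-shaped host ID carved out of it, and the NVME_HOST argument array. A sketch of how those pieces are typically consumed (the subsystem NQN matches NVME_SUBNQN from the trace; the target address is a placeholder, not a value from this run):

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # trailing UUID portion
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    # Connect with that identity over NVMe/TCP (address is a placeholder).
    nvme connect -t tcp -a 10.0.0.1 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"
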
00:03:37.433 13:53:43 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:37.433 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:37.433 13:53:43 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:37.433 13:53:43 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:37.433 13:53:43 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:37.433 13:53:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:37.433 13:53:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:37.433 13:53:43 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:37.433 13:53:43 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:37.433 13:53:43 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:37.433 13:53:43 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:37.433 13:53:43 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:37.433 13:53:43 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:37.433 13:53:43 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:37.433 13:53:43 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:37.433 13:53:43 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:37.433 13:53:43 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:37.433 13:53:43 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:37.433 13:53:43 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:37.433 INFO: JSON configuration test init 00:03:37.433 13:53:43 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:37.433 13:53:43 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:37.433 13:53:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:37.433 13:53:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:37.433 13:53:43 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:37.433 13:53:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:37.433 13:53:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:37.433 13:53:43 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:37.433 13:53:43 json_config -- 
json_config/common.sh@9 -- # local app=target 00:03:37.433 13:53:43 json_config -- json_config/common.sh@10 -- # shift 00:03:37.433 13:53:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:37.433 13:53:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:37.433 13:53:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:37.433 13:53:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:37.433 13:53:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:37.433 13:53:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2488771 00:03:37.433 13:53:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:37.433 Waiting for target to run... 00:03:37.433 13:53:43 json_config -- json_config/common.sh@25 -- # waitforlisten 2488771 /var/tmp/spdk_tgt.sock 00:03:37.433 13:53:43 json_config -- common/autotest_common.sh@835 -- # '[' -z 2488771 ']' 00:03:37.433 13:53:43 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:37.433 13:53:43 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:37.433 13:53:43 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:37.433 13:53:43 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:37.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:37.433 13:53:43 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:37.433 13:53:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:37.433 [2024-12-05 13:53:43.675894] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
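
The "[: : integer expression expected" complaint a few entries up (nvmf/common.sh line 33) is a real, if harmless, scripting bug captured by the log: test(1) is handed an empty string where -eq demands an integer, so the check errors out instead of evaluating false. The standard guards (variable name illustrative):

    # Errors when $SPDK_TEST_FOO is empty or unset:
    #   [ "$SPDK_TEST_FOO" -eq 1 ] && echo enabled

    # Safe: default the value, or test for non-emptiness first.
    [ "${SPDK_TEST_FOO:-0}" -eq 1 ] && echo enabled
    [[ -n "$SPDK_TEST_FOO" && "$SPDK_TEST_FOO" -eq 1 ]] && echo enabled
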
00:03:37.433 [2024-12-05 13:53:43.675967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2488771 ] 00:03:37.695 [2024-12-05 13:53:43.975120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:37.954 [2024-12-05 13:53:44.003159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:38.214 13:53:44 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:38.214 13:53:44 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:38.214 13:53:44 json_config -- json_config/common.sh@26 -- # echo '' 00:03:38.214 00:03:38.214 13:53:44 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:38.214 13:53:44 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:38.214 13:53:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:38.214 13:53:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:38.214 13:53:44 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:38.214 13:53:44 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:38.214 13:53:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:38.214 13:53:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:38.473 13:53:44 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:38.473 13:53:44 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:38.473 13:53:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:39.044 13:53:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:39.044 13:53:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:39.044 13:53:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:39.044 13:53:45 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@54 -- # sort 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:39.044 13:53:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:39.044 13:53:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:39.044 13:53:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:39.044 13:53:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:39.044 13:53:45 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:39.044 13:53:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:39.303 MallocForNvmf0 00:03:39.303 13:53:45 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:39.303 13:53:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:39.563 MallocForNvmf1 00:03:39.563 13:53:45 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:39.563 13:53:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:39.563 [2024-12-05 13:53:45.818752] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:39.563 13:53:45 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:39.563 13:53:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:39.823 13:53:46 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:39.823 13:53:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:40.082 13:53:46 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:40.082 13:53:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:40.341 13:53:46 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:40.341 13:53:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:40.342 [2024-12-05 13:53:46.536903] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:40.342 13:53:46 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:40.342 13:53:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:40.342 13:53:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:40.342 13:53:46 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:40.342 13:53:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:40.342 13:53:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:40.601 13:53:46 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:40.601 13:53:46 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:40.601 13:53:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:40.601 MallocBdevForConfigChangeCheck 00:03:40.601 13:53:46 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:40.601 13:53:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:40.601 13:53:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:40.601 13:53:46 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:40.601 13:53:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:41.172 13:53:47 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:03:41.172 INFO: shutting down applications... 
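The subsystem configuration assembled above reduces to a short RPC sequence. A minimal sketch of the same steps, assuming spdk_tgt is already serving RPCs on /var/tmp/spdk_tgt.sock and rpc.py is invoked from the SPDK repo root (paths shortened from the log; $rpc is left unquoted deliberately so it word-splits into command plus socket flag):

  rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0       # 8 MiB bdev, 512-byte blocks
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1      # 4 MiB bdev, 1024-byte blocks
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0            # -u io unit size, -c in-capsule data size
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420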
00:03:41.172 13:53:47 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:03:41.172 13:53:47 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:03:41.172 13:53:47 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:03:41.172 13:53:47 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:41.434 Calling clear_iscsi_subsystem 00:03:41.434 Calling clear_nvmf_subsystem 00:03:41.434 Calling clear_nbd_subsystem 00:03:41.434 Calling clear_ublk_subsystem 00:03:41.434 Calling clear_vhost_blk_subsystem 00:03:41.434 Calling clear_vhost_scsi_subsystem 00:03:41.434 Calling clear_bdev_subsystem 00:03:41.434 13:53:47 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:41.434 13:53:47 json_config -- json_config/json_config.sh@350 -- # count=100 00:03:41.434 13:53:47 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:03:41.434 13:53:47 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:41.434 13:53:47 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:41.434 13:53:47 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:42.005 13:53:48 json_config -- json_config/json_config.sh@352 -- # break 00:03:42.005 13:53:48 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:03:42.005 13:53:48 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:03:42.005 13:53:48 json_config -- json_config/common.sh@31 -- # local app=target 00:03:42.005 13:53:48 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:42.005 13:53:48 json_config -- json_config/common.sh@35 -- # [[ -n 2488771 ]] 00:03:42.005 13:53:48 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2488771 00:03:42.005 13:53:48 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:42.005 13:53:48 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:42.005 13:53:48 json_config -- json_config/common.sh@41 -- # kill -0 2488771 00:03:42.005 13:53:48 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:42.265 13:53:48 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:42.266 13:53:48 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:42.266 13:53:48 json_config -- json_config/common.sh@41 -- # kill -0 2488771 00:03:42.266 13:53:48 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:42.266 13:53:48 json_config -- json_config/common.sh@43 -- # break 00:03:42.266 13:53:48 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:42.266 13:53:48 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:42.266 SPDK target shutdown done 00:03:42.266 13:53:48 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:03:42.266 INFO: relaunching applications... 
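The shutdown that just completed follows a fixed pattern: send SIGINT, then poll with kill -0 (which probes the pid without delivering a signal) every 0.5 s for up to 30 tries. A hedged sketch of that loop, names illustrative:

  shutdown_app() {
    local pid=$1 i
    kill -SIGINT "$pid" 2>/dev/null || return 0    # already gone
    for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2>/dev/null || return 0       # exited: clean shutdown
      sleep 0.5
    done
    echo "pid $pid still alive after SIGINT" >&2
    return 1
  }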
00:03:42.266 13:53:48 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:42.266 13:53:48 json_config -- json_config/common.sh@9 -- # local app=target 00:03:42.266 13:53:48 json_config -- json_config/common.sh@10 -- # shift 00:03:42.266 13:53:48 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:42.266 13:53:48 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:42.266 13:53:48 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:42.266 13:53:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:42.266 13:53:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:42.266 13:53:48 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2489849 00:03:42.266 13:53:48 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:42.266 Waiting for target to run... 00:03:42.266 13:53:48 json_config -- json_config/common.sh@25 -- # waitforlisten 2489849 /var/tmp/spdk_tgt.sock 00:03:42.266 13:53:48 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:42.266 13:53:48 json_config -- common/autotest_common.sh@835 -- # '[' -z 2489849 ']' 00:03:42.266 13:53:48 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:42.266 13:53:48 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:42.266 13:53:48 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:42.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:42.266 13:53:48 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:42.266 13:53:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:42.541 [2024-12-05 13:53:48.585326] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:03:42.541 [2024-12-05 13:53:48.585386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2489849 ] 00:03:42.801 [2024-12-05 13:53:48.847500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.801 [2024-12-05 13:53:48.871389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:43.372 [2024-12-05 13:53:49.372921] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:43.372 [2024-12-05 13:53:49.405274] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:43.372 13:53:49 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:43.372 13:53:49 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:43.372 13:53:49 json_config -- json_config/common.sh@26 -- # echo '' 00:03:43.372 00:03:43.372 13:53:49 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:03:43.372 13:53:49 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:43.372 INFO: Checking if target configuration is the same... 
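waitforlisten above blocks until the relaunched target answers on its RPC socket. One way to approximate it (an approximation, not the helper's actual body): relaunch from the saved JSON config, then poll a cheap RPC such as rpc_get_methods until it succeeds. The 1 s timeout and 100-iteration budget below are illustrative:

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &
  tgt_pid=$!
  for ((i = 0; i < 100; i++)); do
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
  done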
00:03:43.372 13:53:49 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:43.372 13:53:49 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:03:43.372 13:53:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:43.372 + '[' 2 -ne 2 ']' 00:03:43.372 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:43.372 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:43.372 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:43.373 +++ basename /dev/fd/62 00:03:43.373 ++ mktemp /tmp/62.XXX 00:03:43.373 + tmp_file_1=/tmp/62.vyj 00:03:43.373 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:43.373 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:43.373 + tmp_file_2=/tmp/spdk_tgt_config.json.Dla 00:03:43.373 + ret=0 00:03:43.373 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:43.632 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:43.632 + diff -u /tmp/62.vyj /tmp/spdk_tgt_config.json.Dla 00:03:43.632 + echo 'INFO: JSON config files are the same' 00:03:43.632 INFO: JSON config files are the same 00:03:43.632 + rm /tmp/62.vyj /tmp/spdk_tgt_config.json.Dla 00:03:43.632 + exit 0 00:03:43.632 13:53:49 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:03:43.632 13:53:49 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:43.632 INFO: changing configuration and checking if this can be detected... 00:03:43.632 13:53:49 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:43.633 13:53:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:43.892 13:53:49 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:43.892 13:53:49 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:03:43.892 13:53:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:43.892 + '[' 2 -ne 2 ']' 00:03:43.892 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:43.892 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
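Both the comparison above and the one being set up below reduce to the same recipe: dump the live config via save_config, canonicalize both JSON documents with config_filter.py -method sort, and diff the results. A sketch, assuming config_filter.py filters stdin to stdout as json_diff.sh drives it here; temp-file names are illustrative:

  filter=test/json_config/config_filter.py
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.json
  $filter -method sort < spdk_tgt_config.json > /tmp/saved.json
  if diff -u /tmp/saved.json /tmp/live.json; then
    echo 'INFO: JSON config files are the same'
  else
    echo 'INFO: configuration change detected.'
  fi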
00:03:43.892 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:43.892 +++ basename /dev/fd/62 00:03:43.892 ++ mktemp /tmp/62.XXX 00:03:43.892 + tmp_file_1=/tmp/62.Ign 00:03:43.892 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:43.892 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:43.892 + tmp_file_2=/tmp/spdk_tgt_config.json.XEg 00:03:43.892 + ret=0 00:03:43.892 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:44.153 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:44.153 + diff -u /tmp/62.Ign /tmp/spdk_tgt_config.json.XEg 00:03:44.153 + ret=1 00:03:44.153 + echo '=== Start of file: /tmp/62.Ign ===' 00:03:44.153 + cat /tmp/62.Ign 00:03:44.153 + echo '=== End of file: /tmp/62.Ign ===' 00:03:44.153 + echo '' 00:03:44.153 + echo '=== Start of file: /tmp/spdk_tgt_config.json.XEg ===' 00:03:44.153 + cat /tmp/spdk_tgt_config.json.XEg 00:03:44.153 + echo '=== End of file: /tmp/spdk_tgt_config.json.XEg ===' 00:03:44.153 + echo '' 00:03:44.153 + rm /tmp/62.Ign /tmp/spdk_tgt_config.json.XEg 00:03:44.153 + exit 1 00:03:44.154 13:53:50 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:03:44.154 INFO: configuration change detected. 00:03:44.154 13:53:50 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:03:44.154 13:53:50 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:03:44.154 13:53:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:44.154 13:53:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.154 13:53:50 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:03:44.154 13:53:50 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:03:44.154 13:53:50 json_config -- json_config/json_config.sh@324 -- # [[ -n 2489849 ]] 00:03:44.154 13:53:50 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:03:44.154 13:53:50 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:03:44.154 13:53:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:44.154 13:53:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.154 13:53:50 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:03:44.154 13:53:50 json_config -- json_config/json_config.sh@200 -- # uname -s 00:03:44.154 13:53:50 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:03:44.154 13:53:50 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:03:44.154 13:53:50 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:03:44.154 13:53:50 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:03:44.154 13:53:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:44.154 13:53:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.154 13:53:50 json_config -- json_config/json_config.sh@330 -- # killprocess 2489849 00:03:44.154 13:53:50 json_config -- common/autotest_common.sh@954 -- # '[' -z 2489849 ']' 00:03:44.154 13:53:50 json_config -- common/autotest_common.sh@958 -- # kill -0 2489849 00:03:44.154 13:53:50 json_config -- common/autotest_common.sh@959 -- # uname 00:03:44.414 13:53:50 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:44.414 13:53:50 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2489849 00:03:44.414 13:53:50 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:44.414 13:53:50 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:44.414 13:53:50 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2489849' 00:03:44.414 killing process with pid 2489849 00:03:44.414 13:53:50 json_config -- common/autotest_common.sh@973 -- # kill 2489849 00:03:44.414 13:53:50 json_config -- common/autotest_common.sh@978 -- # wait 2489849 00:03:44.675 13:53:50 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:44.675 13:53:50 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:03:44.675 13:53:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:44.675 13:53:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.675 13:53:50 json_config -- json_config/json_config.sh@335 -- # return 0 00:03:44.675 13:53:50 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:03:44.675 INFO: Success 00:03:44.675 00:03:44.675 real 0m7.437s 00:03:44.675 user 0m9.097s 00:03:44.675 sys 0m1.937s 00:03:44.675 13:53:50 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:44.675 13:53:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.675 ************************************ 00:03:44.675 END TEST json_config 00:03:44.675 ************************************ 00:03:44.675 13:53:50 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:44.675 13:53:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:44.675 13:53:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:44.675 13:53:50 -- common/autotest_common.sh@10 -- # set +x 00:03:44.675 ************************************ 00:03:44.675 START TEST json_config_extra_key 00:03:44.675 ************************************ 00:03:44.675 13:53:50 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:44.675 13:53:50 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:44.675 13:53:50 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:03:44.675 13:53:50 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:44.936 13:53:51 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:03:44.936 13:53:51 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:44.936 13:53:51 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:03:44.936 13:53:51 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:44.936 13:53:51 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:44.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.936 --rc genhtml_branch_coverage=1 00:03:44.936 --rc genhtml_function_coverage=1 00:03:44.936 --rc genhtml_legend=1 00:03:44.936 --rc geninfo_all_blocks=1 00:03:44.936 --rc geninfo_unexecuted_blocks=1 00:03:44.936 00:03:44.936 ' 00:03:44.936 13:53:51 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:44.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.936 --rc genhtml_branch_coverage=1 00:03:44.936 --rc genhtml_function_coverage=1 00:03:44.936 --rc genhtml_legend=1 00:03:44.936 --rc geninfo_all_blocks=1 00:03:44.936 --rc geninfo_unexecuted_blocks=1 00:03:44.936 00:03:44.936 ' 00:03:44.936 13:53:51 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:44.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.936 --rc genhtml_branch_coverage=1 00:03:44.936 --rc genhtml_function_coverage=1 00:03:44.936 --rc genhtml_legend=1 00:03:44.936 --rc geninfo_all_blocks=1 00:03:44.936 --rc geninfo_unexecuted_blocks=1 00:03:44.936 00:03:44.936 ' 00:03:44.936 13:53:51 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:44.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.937 --rc genhtml_branch_coverage=1 00:03:44.937 --rc genhtml_function_coverage=1 00:03:44.937 --rc genhtml_legend=1 00:03:44.937 --rc geninfo_all_blocks=1 00:03:44.937 --rc geninfo_unexecuted_blocks=1 00:03:44.937 00:03:44.937 ' 00:03:44.937 13:53:51 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:44.937 13:53:51 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:03:44.937 13:53:51 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:44.937 13:53:51 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:44.937 13:53:51 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:44.937 13:53:51 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.937 13:53:51 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.937 13:53:51 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.937 13:53:51 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:44.937 13:53:51 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:44.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:44.937 13:53:51 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:44.937 13:53:51 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:44.937 13:53:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:44.937 13:53:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:44.937 13:53:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:44.937 13:53:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:44.937 13:53:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:44.937 13:53:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:44.937 13:53:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:44.937 13:53:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:44.937 13:53:51 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:44.937 13:53:51 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:44.937 INFO: launching applications... 
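The "[: : integer expression expected" warning logged a few records back comes from '[' '' -eq 1 ']': an unset flag reaches a numeric test as an empty string. A common hardening, shown with an illustrative variable name rather than the project's actual fix, is to coerce the flag to 0 before testing it:

  : "${SPDK_TEST_SOME_FLAG:=0}"            # illustrative name; default unset/empty to 0
  if [ "$SPDK_TEST_SOME_FLAG" -eq 1 ]; then
    echo 'feature enabled'
  fi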
00:03:44.937 13:53:51 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:44.937 13:53:51 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:44.937 13:53:51 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:44.937 13:53:51 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:44.937 13:53:51 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:44.937 13:53:51 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:44.937 13:53:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:44.937 13:53:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:44.937 13:53:51 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2490517 00:03:44.937 13:53:51 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:44.937 Waiting for target to run... 00:03:44.937 13:53:51 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2490517 /var/tmp/spdk_tgt.sock 00:03:44.937 13:53:51 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2490517 ']' 00:03:44.937 13:53:51 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:44.937 13:53:51 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:44.937 13:53:51 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:44.937 13:53:51 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:44.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:44.937 13:53:51 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:44.937 13:53:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:44.937 [2024-12-05 13:53:51.160108] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:03:44.937 [2024-12-05 13:53:51.160178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2490517 ] 00:03:45.197 [2024-12-05 13:53:51.451158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.197 [2024-12-05 13:53:51.477443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:45.767 13:53:51 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:45.767 13:53:51 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:03:45.767 13:53:51 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:45.767 00:03:45.767 13:53:51 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:03:45.767 INFO: shutting down applications... 
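The app_pid/app_socket/app_params/configs_path tables declared just above are plain bash associative arrays keyed by app name, so one set of helpers can start, probe, and stop several daemons. A stripped-down sketch of that bookkeeping (spdk_tgt assumed on PATH; ${app_params[$app]} is left unquoted so its flags word-split):

  declare -A app_pid=( [target]='' )
  declare -A app_socket=( [target]=/var/tmp/spdk_tgt.sock )
  declare -A app_params=( [target]='-m 0x1 -s 1024' )

  start_app() {
    local app=$1
    spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" &
    app_pid[$app]=$!
  }

  stop_app() {
    local app=$1
    [ -n "${app_pid[$app]}" ] && kill -SIGINT "${app_pid[$app]}"
    app_pid[$app]=''
  }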
00:03:45.767 13:53:51 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:45.767 13:53:51 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:45.767 13:53:51 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:45.767 13:53:51 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2490517 ]] 00:03:45.767 13:53:51 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2490517 00:03:45.767 13:53:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:45.767 13:53:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:45.767 13:53:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2490517 00:03:45.767 13:53:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:46.336 13:53:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:46.336 13:53:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:46.336 13:53:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2490517 00:03:46.336 13:53:52 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:46.336 13:53:52 json_config_extra_key -- json_config/common.sh@43 -- # break 00:03:46.336 13:53:52 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:46.336 13:53:52 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:46.336 SPDK target shutdown done 00:03:46.336 13:53:52 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:03:46.336 Success 00:03:46.336 00:03:46.336 real 0m1.564s 00:03:46.336 user 0m1.184s 00:03:46.336 sys 0m0.401s 00:03:46.336 13:53:52 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:46.336 13:53:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:46.336 ************************************ 00:03:46.336 END TEST json_config_extra_key 00:03:46.336 ************************************ 00:03:46.336 13:53:52 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:46.336 13:53:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:46.336 13:53:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:46.336 13:53:52 -- common/autotest_common.sh@10 -- # set +x 00:03:46.336 ************************************ 00:03:46.336 START TEST alias_rpc 00:03:46.336 ************************************ 00:03:46.336 13:53:52 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:46.336 * Looking for test storage... 
00:03:46.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:03:46.596 13:53:52 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:46.596 13:53:52 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:46.596 13:53:52 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:46.596 13:53:52 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:46.596 13:53:52 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:46.596 13:53:52 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:46.596 13:53:52 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:46.596 13:53:52 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:46.596 13:53:52 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:46.596 13:53:52 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:46.596 13:53:52 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:46.596 13:53:52 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:46.596 13:53:52 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:46.596 13:53:52 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:46.596 13:53:52 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:46.596 13:53:52 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:46.596 13:53:52 alias_rpc -- scripts/common.sh@345 -- # : 1 00:03:46.596 13:53:52 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:46.596 13:53:52 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:46.596 13:53:52 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:46.596 13:53:52 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:03:46.596 13:53:52 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:46.596 13:53:52 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:03:46.596 13:53:52 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:46.596 13:53:52 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:46.597 13:53:52 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:03:46.597 13:53:52 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:46.597 13:53:52 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:03:46.597 13:53:52 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:46.597 13:53:52 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:46.597 13:53:52 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:46.597 13:53:52 alias_rpc -- scripts/common.sh@368 -- # return 0 00:03:46.597 13:53:52 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:46.597 13:53:52 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:46.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.597 --rc genhtml_branch_coverage=1 00:03:46.597 --rc genhtml_function_coverage=1 00:03:46.597 --rc genhtml_legend=1 00:03:46.597 --rc geninfo_all_blocks=1 00:03:46.597 --rc geninfo_unexecuted_blocks=1 00:03:46.597 00:03:46.597 ' 00:03:46.597 13:53:52 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:46.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.597 --rc genhtml_branch_coverage=1 00:03:46.597 --rc genhtml_function_coverage=1 00:03:46.597 --rc genhtml_legend=1 00:03:46.597 --rc geninfo_all_blocks=1 00:03:46.597 --rc geninfo_unexecuted_blocks=1 00:03:46.597 00:03:46.597 ' 00:03:46.597 13:53:52 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:46.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.597 --rc genhtml_branch_coverage=1 00:03:46.597 --rc genhtml_function_coverage=1 00:03:46.597 --rc genhtml_legend=1 00:03:46.597 --rc geninfo_all_blocks=1 00:03:46.597 --rc geninfo_unexecuted_blocks=1 00:03:46.597 00:03:46.597 ' 00:03:46.597 13:53:52 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:46.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:46.597 --rc genhtml_branch_coverage=1 00:03:46.597 --rc genhtml_function_coverage=1 00:03:46.597 --rc genhtml_legend=1 00:03:46.597 --rc geninfo_all_blocks=1 00:03:46.597 --rc geninfo_unexecuted_blocks=1 00:03:46.597 00:03:46.597 ' 00:03:46.597 13:53:52 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:46.597 13:53:52 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2490918 00:03:46.597 13:53:52 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2490918 00:03:46.597 13:53:52 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:46.597 13:53:52 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2490918 ']' 00:03:46.597 13:53:52 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:46.597 13:53:52 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:46.597 13:53:52 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:46.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:46.597 13:53:52 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:46.597 13:53:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:46.597 [2024-12-05 13:53:52.797691] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:03:46.597 [2024-12-05 13:53:52.797744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2490918 ] 00:03:46.597 [2024-12-05 13:53:52.879141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:46.856 [2024-12-05 13:53:52.910175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:47.427 13:53:53 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:47.427 13:53:53 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:03:47.427 13:53:53 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:03:47.687 13:53:53 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2490918 00:03:47.687 13:53:53 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2490918 ']' 00:03:47.687 13:53:53 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2490918 00:03:47.687 13:53:53 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:03:47.687 13:53:53 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:47.687 13:53:53 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2490918 00:03:47.687 13:53:53 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:47.687 13:53:53 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:47.687 13:53:53 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2490918' 00:03:47.687 killing process with pid 2490918 00:03:47.687 13:53:53 alias_rpc -- common/autotest_common.sh@973 -- # kill 2490918 00:03:47.687 13:53:53 alias_rpc -- common/autotest_common.sh@978 -- # wait 2490918 00:03:47.956 00:03:47.956 real 0m1.503s 00:03:47.956 user 0m1.680s 00:03:47.956 sys 0m0.394s 00:03:47.956 13:53:54 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:47.956 13:53:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.956 ************************************ 00:03:47.956 END TEST alias_rpc 00:03:47.956 ************************************ 00:03:47.956 13:53:54 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:03:47.956 13:53:54 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:47.956 13:53:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:47.956 13:53:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:47.956 13:53:54 -- common/autotest_common.sh@10 -- # set +x 00:03:47.956 ************************************ 00:03:47.956 START TEST spdkcli_tcp 00:03:47.956 ************************************ 00:03:47.956 13:53:54 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:47.956 * Looking for test storage... 
00:03:47.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:03:47.957 13:53:54 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:47.957 13:53:54 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:03:47.957 13:53:54 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:48.217 13:53:54 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:48.217 13:53:54 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:03:48.217 13:53:54 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:48.217 13:53:54 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:48.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.217 --rc genhtml_branch_coverage=1 00:03:48.217 --rc genhtml_function_coverage=1 00:03:48.217 --rc genhtml_legend=1 00:03:48.217 --rc geninfo_all_blocks=1 00:03:48.217 --rc geninfo_unexecuted_blocks=1 00:03:48.217 00:03:48.217 ' 00:03:48.217 13:53:54 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:48.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.217 --rc genhtml_branch_coverage=1 00:03:48.217 --rc genhtml_function_coverage=1 00:03:48.217 --rc genhtml_legend=1 00:03:48.217 --rc geninfo_all_blocks=1 00:03:48.217 --rc 
geninfo_unexecuted_blocks=1 00:03:48.217 00:03:48.217 ' 00:03:48.217 13:53:54 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:48.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.217 --rc genhtml_branch_coverage=1 00:03:48.217 --rc genhtml_function_coverage=1 00:03:48.217 --rc genhtml_legend=1 00:03:48.217 --rc geninfo_all_blocks=1 00:03:48.217 --rc geninfo_unexecuted_blocks=1 00:03:48.218 00:03:48.218 ' 00:03:48.218 13:53:54 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:48.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.218 --rc genhtml_branch_coverage=1 00:03:48.218 --rc genhtml_function_coverage=1 00:03:48.218 --rc genhtml_legend=1 00:03:48.218 --rc geninfo_all_blocks=1 00:03:48.218 --rc geninfo_unexecuted_blocks=1 00:03:48.218 00:03:48.218 ' 00:03:48.218 13:53:54 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:03:48.218 13:53:54 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:03:48.218 13:53:54 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:03:48.218 13:53:54 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:03:48.218 13:53:54 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:03:48.218 13:53:54 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:03:48.218 13:53:54 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:03:48.218 13:53:54 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:48.218 13:53:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:48.218 13:53:54 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2491314 00:03:48.218 13:53:54 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2491314 00:03:48.218 13:53:54 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:03:48.218 13:53:54 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2491314 ']' 00:03:48.218 13:53:54 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:48.218 13:53:54 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:48.218 13:53:54 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:48.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:48.218 13:53:54 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:48.218 13:53:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:48.218 [2024-12-05 13:53:54.376569] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:03:48.218 [2024-12-05 13:53:54.376641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2491314 ] 00:03:48.218 [2024-12-05 13:53:54.464090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:48.218 [2024-12-05 13:53:54.499722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:48.218 [2024-12-05 13:53:54.499812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.162 13:53:55 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:49.162 13:53:55 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:03:49.162 13:53:55 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2491371 00:03:49.162 13:53:55 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:03:49.162 13:53:55 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:03:49.162 [ 00:03:49.162 "bdev_malloc_delete", 00:03:49.162 "bdev_malloc_create", 00:03:49.162 "bdev_null_resize", 00:03:49.162 "bdev_null_delete", 00:03:49.162 "bdev_null_create", 00:03:49.162 "bdev_nvme_cuse_unregister", 00:03:49.162 "bdev_nvme_cuse_register", 00:03:49.162 "bdev_opal_new_user", 00:03:49.162 "bdev_opal_set_lock_state", 00:03:49.162 "bdev_opal_delete", 00:03:49.162 "bdev_opal_get_info", 00:03:49.162 "bdev_opal_create", 00:03:49.162 "bdev_nvme_opal_revert", 00:03:49.162 "bdev_nvme_opal_init", 00:03:49.162 "bdev_nvme_send_cmd", 00:03:49.162 "bdev_nvme_set_keys", 00:03:49.162 "bdev_nvme_get_path_iostat", 00:03:49.162 "bdev_nvme_get_mdns_discovery_info", 00:03:49.162 "bdev_nvme_stop_mdns_discovery", 00:03:49.162 "bdev_nvme_start_mdns_discovery", 00:03:49.162 "bdev_nvme_set_multipath_policy", 00:03:49.162 "bdev_nvme_set_preferred_path", 00:03:49.162 "bdev_nvme_get_io_paths", 00:03:49.162 "bdev_nvme_remove_error_injection", 00:03:49.162 "bdev_nvme_add_error_injection", 00:03:49.162 "bdev_nvme_get_discovery_info", 00:03:49.162 "bdev_nvme_stop_discovery", 00:03:49.162 "bdev_nvme_start_discovery", 00:03:49.162 "bdev_nvme_get_controller_health_info", 00:03:49.162 "bdev_nvme_disable_controller", 00:03:49.162 "bdev_nvme_enable_controller", 00:03:49.162 "bdev_nvme_reset_controller", 00:03:49.162 "bdev_nvme_get_transport_statistics", 00:03:49.162 "bdev_nvme_apply_firmware", 00:03:49.162 "bdev_nvme_detach_controller", 00:03:49.162 "bdev_nvme_get_controllers", 00:03:49.162 "bdev_nvme_attach_controller", 00:03:49.162 "bdev_nvme_set_hotplug", 00:03:49.162 "bdev_nvme_set_options", 00:03:49.162 "bdev_passthru_delete", 00:03:49.162 "bdev_passthru_create", 00:03:49.162 "bdev_lvol_set_parent_bdev", 00:03:49.162 "bdev_lvol_set_parent", 00:03:49.162 "bdev_lvol_check_shallow_copy", 00:03:49.162 "bdev_lvol_start_shallow_copy", 00:03:49.162 "bdev_lvol_grow_lvstore", 00:03:49.162 "bdev_lvol_get_lvols", 00:03:49.162 "bdev_lvol_get_lvstores", 00:03:49.162 "bdev_lvol_delete", 00:03:49.162 "bdev_lvol_set_read_only", 00:03:49.162 "bdev_lvol_resize", 00:03:49.162 "bdev_lvol_decouple_parent", 00:03:49.162 "bdev_lvol_inflate", 00:03:49.162 "bdev_lvol_rename", 00:03:49.162 "bdev_lvol_clone_bdev", 00:03:49.162 "bdev_lvol_clone", 00:03:49.162 "bdev_lvol_snapshot", 00:03:49.162 "bdev_lvol_create", 00:03:49.162 "bdev_lvol_delete_lvstore", 00:03:49.162 "bdev_lvol_rename_lvstore", 
00:03:49.162 "bdev_lvol_create_lvstore", 00:03:49.162 "bdev_raid_set_options", 00:03:49.162 "bdev_raid_remove_base_bdev", 00:03:49.162 "bdev_raid_add_base_bdev", 00:03:49.162 "bdev_raid_delete", 00:03:49.162 "bdev_raid_create", 00:03:49.162 "bdev_raid_get_bdevs", 00:03:49.162 "bdev_error_inject_error", 00:03:49.162 "bdev_error_delete", 00:03:49.162 "bdev_error_create", 00:03:49.162 "bdev_split_delete", 00:03:49.162 "bdev_split_create", 00:03:49.162 "bdev_delay_delete", 00:03:49.162 "bdev_delay_create", 00:03:49.162 "bdev_delay_update_latency", 00:03:49.162 "bdev_zone_block_delete", 00:03:49.162 "bdev_zone_block_create", 00:03:49.162 "blobfs_create", 00:03:49.162 "blobfs_detect", 00:03:49.162 "blobfs_set_cache_size", 00:03:49.162 "bdev_aio_delete", 00:03:49.162 "bdev_aio_rescan", 00:03:49.162 "bdev_aio_create", 00:03:49.162 "bdev_ftl_set_property", 00:03:49.162 "bdev_ftl_get_properties", 00:03:49.162 "bdev_ftl_get_stats", 00:03:49.162 "bdev_ftl_unmap", 00:03:49.162 "bdev_ftl_unload", 00:03:49.162 "bdev_ftl_delete", 00:03:49.162 "bdev_ftl_load", 00:03:49.162 "bdev_ftl_create", 00:03:49.162 "bdev_virtio_attach_controller", 00:03:49.162 "bdev_virtio_scsi_get_devices", 00:03:49.162 "bdev_virtio_detach_controller", 00:03:49.162 "bdev_virtio_blk_set_hotplug", 00:03:49.162 "bdev_iscsi_delete", 00:03:49.162 "bdev_iscsi_create", 00:03:49.162 "bdev_iscsi_set_options", 00:03:49.162 "accel_error_inject_error", 00:03:49.162 "ioat_scan_accel_module", 00:03:49.162 "dsa_scan_accel_module", 00:03:49.162 "iaa_scan_accel_module", 00:03:49.162 "vfu_virtio_create_fs_endpoint", 00:03:49.162 "vfu_virtio_create_scsi_endpoint", 00:03:49.162 "vfu_virtio_scsi_remove_target", 00:03:49.162 "vfu_virtio_scsi_add_target", 00:03:49.162 "vfu_virtio_create_blk_endpoint", 00:03:49.162 "vfu_virtio_delete_endpoint", 00:03:49.162 "keyring_file_remove_key", 00:03:49.162 "keyring_file_add_key", 00:03:49.162 "keyring_linux_set_options", 00:03:49.162 "fsdev_aio_delete", 00:03:49.162 "fsdev_aio_create", 00:03:49.162 "iscsi_get_histogram", 00:03:49.162 "iscsi_enable_histogram", 00:03:49.162 "iscsi_set_options", 00:03:49.162 "iscsi_get_auth_groups", 00:03:49.162 "iscsi_auth_group_remove_secret", 00:03:49.162 "iscsi_auth_group_add_secret", 00:03:49.162 "iscsi_delete_auth_group", 00:03:49.162 "iscsi_create_auth_group", 00:03:49.162 "iscsi_set_discovery_auth", 00:03:49.162 "iscsi_get_options", 00:03:49.162 "iscsi_target_node_request_logout", 00:03:49.162 "iscsi_target_node_set_redirect", 00:03:49.162 "iscsi_target_node_set_auth", 00:03:49.162 "iscsi_target_node_add_lun", 00:03:49.162 "iscsi_get_stats", 00:03:49.162 "iscsi_get_connections", 00:03:49.162 "iscsi_portal_group_set_auth", 00:03:49.162 "iscsi_start_portal_group", 00:03:49.162 "iscsi_delete_portal_group", 00:03:49.162 "iscsi_create_portal_group", 00:03:49.162 "iscsi_get_portal_groups", 00:03:49.162 "iscsi_delete_target_node", 00:03:49.162 "iscsi_target_node_remove_pg_ig_maps", 00:03:49.162 "iscsi_target_node_add_pg_ig_maps", 00:03:49.162 "iscsi_create_target_node", 00:03:49.162 "iscsi_get_target_nodes", 00:03:49.162 "iscsi_delete_initiator_group", 00:03:49.162 "iscsi_initiator_group_remove_initiators", 00:03:49.162 "iscsi_initiator_group_add_initiators", 00:03:49.162 "iscsi_create_initiator_group", 00:03:49.162 "iscsi_get_initiator_groups", 00:03:49.162 "nvmf_set_crdt", 00:03:49.162 "nvmf_set_config", 00:03:49.162 "nvmf_set_max_subsystems", 00:03:49.162 "nvmf_stop_mdns_prr", 00:03:49.162 "nvmf_publish_mdns_prr", 00:03:49.162 "nvmf_subsystem_get_listeners", 00:03:49.162 
"nvmf_subsystem_get_qpairs", 00:03:49.162 "nvmf_subsystem_get_controllers", 00:03:49.162 "nvmf_get_stats", 00:03:49.162 "nvmf_get_transports", 00:03:49.162 "nvmf_create_transport", 00:03:49.162 "nvmf_get_targets", 00:03:49.162 "nvmf_delete_target", 00:03:49.162 "nvmf_create_target", 00:03:49.162 "nvmf_subsystem_allow_any_host", 00:03:49.162 "nvmf_subsystem_set_keys", 00:03:49.162 "nvmf_subsystem_remove_host", 00:03:49.162 "nvmf_subsystem_add_host", 00:03:49.162 "nvmf_ns_remove_host", 00:03:49.162 "nvmf_ns_add_host", 00:03:49.162 "nvmf_subsystem_remove_ns", 00:03:49.162 "nvmf_subsystem_set_ns_ana_group", 00:03:49.162 "nvmf_subsystem_add_ns", 00:03:49.162 "nvmf_subsystem_listener_set_ana_state", 00:03:49.162 "nvmf_discovery_get_referrals", 00:03:49.162 "nvmf_discovery_remove_referral", 00:03:49.162 "nvmf_discovery_add_referral", 00:03:49.162 "nvmf_subsystem_remove_listener", 00:03:49.162 "nvmf_subsystem_add_listener", 00:03:49.162 "nvmf_delete_subsystem", 00:03:49.162 "nvmf_create_subsystem", 00:03:49.162 "nvmf_get_subsystems", 00:03:49.162 "env_dpdk_get_mem_stats", 00:03:49.162 "nbd_get_disks", 00:03:49.162 "nbd_stop_disk", 00:03:49.162 "nbd_start_disk", 00:03:49.162 "ublk_recover_disk", 00:03:49.162 "ublk_get_disks", 00:03:49.162 "ublk_stop_disk", 00:03:49.162 "ublk_start_disk", 00:03:49.162 "ublk_destroy_target", 00:03:49.162 "ublk_create_target", 00:03:49.162 "virtio_blk_create_transport", 00:03:49.162 "virtio_blk_get_transports", 00:03:49.162 "vhost_controller_set_coalescing", 00:03:49.162 "vhost_get_controllers", 00:03:49.162 "vhost_delete_controller", 00:03:49.162 "vhost_create_blk_controller", 00:03:49.162 "vhost_scsi_controller_remove_target", 00:03:49.162 "vhost_scsi_controller_add_target", 00:03:49.162 "vhost_start_scsi_controller", 00:03:49.162 "vhost_create_scsi_controller", 00:03:49.162 "thread_set_cpumask", 00:03:49.162 "scheduler_set_options", 00:03:49.162 "framework_get_governor", 00:03:49.162 "framework_get_scheduler", 00:03:49.162 "framework_set_scheduler", 00:03:49.162 "framework_get_reactors", 00:03:49.163 "thread_get_io_channels", 00:03:49.163 "thread_get_pollers", 00:03:49.163 "thread_get_stats", 00:03:49.163 "framework_monitor_context_switch", 00:03:49.163 "spdk_kill_instance", 00:03:49.163 "log_enable_timestamps", 00:03:49.163 "log_get_flags", 00:03:49.163 "log_clear_flag", 00:03:49.163 "log_set_flag", 00:03:49.163 "log_get_level", 00:03:49.163 "log_set_level", 00:03:49.163 "log_get_print_level", 00:03:49.163 "log_set_print_level", 00:03:49.163 "framework_enable_cpumask_locks", 00:03:49.163 "framework_disable_cpumask_locks", 00:03:49.163 "framework_wait_init", 00:03:49.163 "framework_start_init", 00:03:49.163 "scsi_get_devices", 00:03:49.163 "bdev_get_histogram", 00:03:49.163 "bdev_enable_histogram", 00:03:49.163 "bdev_set_qos_limit", 00:03:49.163 "bdev_set_qd_sampling_period", 00:03:49.163 "bdev_get_bdevs", 00:03:49.163 "bdev_reset_iostat", 00:03:49.163 "bdev_get_iostat", 00:03:49.163 "bdev_examine", 00:03:49.163 "bdev_wait_for_examine", 00:03:49.163 "bdev_set_options", 00:03:49.163 "accel_get_stats", 00:03:49.163 "accel_set_options", 00:03:49.163 "accel_set_driver", 00:03:49.163 "accel_crypto_key_destroy", 00:03:49.163 "accel_crypto_keys_get", 00:03:49.163 "accel_crypto_key_create", 00:03:49.163 "accel_assign_opc", 00:03:49.163 "accel_get_module_info", 00:03:49.163 "accel_get_opc_assignments", 00:03:49.163 "vmd_rescan", 00:03:49.163 "vmd_remove_device", 00:03:49.163 "vmd_enable", 00:03:49.163 "sock_get_default_impl", 00:03:49.163 "sock_set_default_impl", 
00:03:49.163 "sock_impl_set_options", 00:03:49.163 "sock_impl_get_options", 00:03:49.163 "iobuf_get_stats", 00:03:49.163 "iobuf_set_options", 00:03:49.163 "keyring_get_keys", 00:03:49.163 "vfu_tgt_set_base_path", 00:03:49.163 "framework_get_pci_devices", 00:03:49.163 "framework_get_config", 00:03:49.163 "framework_get_subsystems", 00:03:49.163 "fsdev_set_opts", 00:03:49.163 "fsdev_get_opts", 00:03:49.163 "trace_get_info", 00:03:49.163 "trace_get_tpoint_group_mask", 00:03:49.163 "trace_disable_tpoint_group", 00:03:49.163 "trace_enable_tpoint_group", 00:03:49.163 "trace_clear_tpoint_mask", 00:03:49.163 "trace_set_tpoint_mask", 00:03:49.163 "notify_get_notifications", 00:03:49.163 "notify_get_types", 00:03:49.163 "spdk_get_version", 00:03:49.163 "rpc_get_methods" 00:03:49.163 ] 00:03:49.163 13:53:55 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:03:49.163 13:53:55 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:49.163 13:53:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:49.163 13:53:55 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:03:49.163 13:53:55 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2491314 00:03:49.163 13:53:55 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2491314 ']' 00:03:49.163 13:53:55 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2491314 00:03:49.163 13:53:55 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:03:49.163 13:53:55 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:49.163 13:53:55 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2491314 00:03:49.163 13:53:55 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:49.163 13:53:55 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:49.163 13:53:55 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2491314' 00:03:49.163 killing process with pid 2491314 00:03:49.163 13:53:55 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2491314 00:03:49.163 13:53:55 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2491314 00:03:49.423 00:03:49.423 real 0m1.524s 00:03:49.423 user 0m2.821s 00:03:49.423 sys 0m0.436s 00:03:49.423 13:53:55 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.423 13:53:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:49.423 ************************************ 00:03:49.423 END TEST spdkcli_tcp 00:03:49.423 ************************************ 00:03:49.423 13:53:55 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:49.423 13:53:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.423 13:53:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.423 13:53:55 -- common/autotest_common.sh@10 -- # set +x 00:03:49.423 ************************************ 00:03:49.423 START TEST dpdk_mem_utility 00:03:49.423 ************************************ 00:03:49.423 13:53:55 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:49.685 * Looking for test storage... 
00:03:49.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:03:49.685 13:53:55 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:49.685 13:53:55 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:03:49.685 13:53:55 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:49.685 13:53:55 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.685 13:53:55 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:03:49.685 13:53:55 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.685 13:53:55 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:49.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.685 --rc genhtml_branch_coverage=1 00:03:49.685 --rc genhtml_function_coverage=1 00:03:49.685 --rc genhtml_legend=1 00:03:49.685 --rc geninfo_all_blocks=1 00:03:49.685 --rc geninfo_unexecuted_blocks=1 00:03:49.685 00:03:49.685 ' 00:03:49.685 13:53:55 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:49.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.685 --rc 
genhtml_branch_coverage=1 00:03:49.685 --rc genhtml_function_coverage=1 00:03:49.685 --rc genhtml_legend=1 00:03:49.685 --rc geninfo_all_blocks=1 00:03:49.685 --rc geninfo_unexecuted_blocks=1 00:03:49.685 00:03:49.685 ' 00:03:49.685 13:53:55 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:49.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.685 --rc genhtml_branch_coverage=1 00:03:49.685 --rc genhtml_function_coverage=1 00:03:49.685 --rc genhtml_legend=1 00:03:49.685 --rc geninfo_all_blocks=1 00:03:49.685 --rc geninfo_unexecuted_blocks=1 00:03:49.685 00:03:49.685 ' 00:03:49.685 13:53:55 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:49.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.685 --rc genhtml_branch_coverage=1 00:03:49.685 --rc genhtml_function_coverage=1 00:03:49.685 --rc genhtml_legend=1 00:03:49.685 --rc geninfo_all_blocks=1 00:03:49.685 --rc geninfo_unexecuted_blocks=1 00:03:49.685 00:03:49.685 ' 00:03:49.685 13:53:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:49.685 13:53:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2491726 00:03:49.685 13:53:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2491726 00:03:49.685 13:53:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:49.685 13:53:55 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2491726 ']' 00:03:49.685 13:53:55 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:49.685 13:53:55 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:49.685 13:53:55 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:49.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:49.685 13:53:55 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:49.685 13:53:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:49.685 [2024-12-05 13:53:55.970883] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
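The spdkcli_tcp test above reaches the target's JSON-RPC interface over TCP by bridging the UNIX-domain socket with socat. A minimal standalone sketch of that bridge, assuming spdk_tgt is already listening on /var/tmp/spdk.sock (command lines mirrored from the trace; the backgrounding and cleanup are added here):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # -r 100 retries the TCP connection while the bridge comes up; -t 2 caps the wait
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill $socat_pid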
00:03:49.685 [2024-12-05 13:53:55.970956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2491726 ] 00:03:49.945 [2024-12-05 13:53:56.057106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.945 [2024-12-05 13:53:56.091599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:50.515 13:53:56 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:50.515 13:53:56 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:03:50.515 13:53:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:03:50.515 13:53:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:03:50.515 13:53:56 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.515 13:53:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:50.515 { 00:03:50.515 "filename": "/tmp/spdk_mem_dump.txt" 00:03:50.515 } 00:03:50.515 13:53:56 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.516 13:53:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:50.777 DPDK memory size 818.000000 MiB in 1 heap(s) 00:03:50.777 1 heaps totaling size 818.000000 MiB 00:03:50.777 size: 818.000000 MiB heap id: 0 00:03:50.777 end heaps---------- 00:03:50.777 9 mempools totaling size 603.782043 MiB 00:03:50.777 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:03:50.777 size: 158.602051 MiB name: PDU_data_out_Pool 00:03:50.777 size: 100.555481 MiB name: bdev_io_2491726 00:03:50.777 size: 50.003479 MiB name: msgpool_2491726 00:03:50.777 size: 36.509338 MiB name: fsdev_io_2491726 00:03:50.777 size: 21.763794 MiB name: PDU_Pool 00:03:50.777 size: 19.513306 MiB name: SCSI_TASK_Pool 00:03:50.777 size: 4.133484 MiB name: evtpool_2491726 00:03:50.777 size: 0.026123 MiB name: Session_Pool 00:03:50.777 end mempools------- 00:03:50.777 6 memzones totaling size 4.142822 MiB 00:03:50.777 size: 1.000366 MiB name: RG_ring_0_2491726 00:03:50.777 size: 1.000366 MiB name: RG_ring_1_2491726 00:03:50.777 size: 1.000366 MiB name: RG_ring_4_2491726 00:03:50.777 size: 1.000366 MiB name: RG_ring_5_2491726 00:03:50.777 size: 0.125366 MiB name: RG_ring_2_2491726 00:03:50.777 size: 0.015991 MiB name: RG_ring_3_2491726 00:03:50.777 end memzones------- 00:03:50.777 13:53:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:03:50.777 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:03:50.777 list of free elements. 
size: 10.852478 MiB 00:03:50.777 element at address: 0x200019200000 with size: 0.999878 MiB 00:03:50.777 element at address: 0x200019400000 with size: 0.999878 MiB 00:03:50.777 element at address: 0x200000400000 with size: 0.998535 MiB 00:03:50.777 element at address: 0x200032000000 with size: 0.994446 MiB 00:03:50.777 element at address: 0x200006400000 with size: 0.959839 MiB 00:03:50.777 element at address: 0x200012c00000 with size: 0.944275 MiB 00:03:50.777 element at address: 0x200019600000 with size: 0.936584 MiB 00:03:50.777 element at address: 0x200000200000 with size: 0.717346 MiB 00:03:50.777 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:03:50.777 element at address: 0x200000c00000 with size: 0.495422 MiB 00:03:50.777 element at address: 0x20000a600000 with size: 0.490723 MiB 00:03:50.777 element at address: 0x200019800000 with size: 0.485657 MiB 00:03:50.777 element at address: 0x200003e00000 with size: 0.481934 MiB 00:03:50.777 element at address: 0x200028200000 with size: 0.410034 MiB 00:03:50.777 element at address: 0x200000800000 with size: 0.355042 MiB 00:03:50.777 list of standard malloc elements. size: 199.218628 MiB 00:03:50.777 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:03:50.777 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:03:50.777 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:03:50.777 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:03:50.777 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:03:50.777 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:03:50.777 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:03:50.777 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:03:50.777 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:03:50.777 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:03:50.777 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:03:50.777 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:03:50.777 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:03:50.777 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:03:50.777 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:03:50.777 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:03:50.777 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:03:50.777 element at address: 0x20000085b040 with size: 0.000183 MiB 00:03:50.777 element at address: 0x20000085f300 with size: 0.000183 MiB 00:03:50.777 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:03:50.777 element at address: 0x20000087f680 with size: 0.000183 MiB 00:03:50.777 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:03:50.777 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:03:50.777 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:03:50.777 element at address: 0x200000cff000 with size: 0.000183 MiB 00:03:50.777 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:03:50.777 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:03:50.777 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:03:50.777 element at address: 0x200003efb980 with size: 0.000183 MiB 00:03:50.777 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:03:50.777 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:03:50.777 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:03:50.777 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:03:50.777 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:03:50.777 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:03:50.777 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:03:50.777 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:03:50.777 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:03:50.777 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:03:50.777 element at address: 0x200028268f80 with size: 0.000183 MiB 00:03:50.777 element at address: 0x200028269040 with size: 0.000183 MiB 00:03:50.777 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:03:50.777 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:03:50.777 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:03:50.777 list of memzone associated elements. size: 607.928894 MiB 00:03:50.777 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:03:50.778 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:03:50.778 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:03:50.778 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:03:50.778 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:03:50.778 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2491726_0 00:03:50.778 element at address: 0x200000dff380 with size: 48.003052 MiB 00:03:50.778 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2491726_0 00:03:50.778 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:03:50.778 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2491726_0 00:03:50.778 element at address: 0x2000199be940 with size: 20.255554 MiB 00:03:50.778 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:03:50.778 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:03:50.778 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:03:50.778 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:03:50.778 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2491726_0 00:03:50.778 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:03:50.778 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2491726 00:03:50.778 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:03:50.778 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2491726 00:03:50.778 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:03:50.778 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:03:50.778 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:03:50.778 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:03:50.778 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:03:50.778 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:03:50.778 element at address: 0x200003efba40 with size: 1.008118 MiB 00:03:50.778 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:03:50.778 element at address: 0x200000cff180 with size: 1.000488 MiB 00:03:50.778 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2491726 00:03:50.778 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:03:50.778 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2491726 00:03:50.778 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:03:50.778 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2491726 00:03:50.778 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:03:50.778 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2491726 00:03:50.778 element at address: 0x20000087f740 with size: 0.500488 MiB 00:03:50.778 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2491726 00:03:50.778 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:03:50.778 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2491726 00:03:50.778 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:03:50.778 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:03:50.778 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:03:50.778 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:03:50.778 element at address: 0x20001987c540 with size: 0.250488 MiB 00:03:50.778 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:03:50.778 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:03:50.778 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2491726 00:03:50.778 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:03:50.778 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2491726 00:03:50.778 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:03:50.778 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:03:50.778 element at address: 0x200028269100 with size: 0.023743 MiB 00:03:50.778 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:03:50.778 element at address: 0x20000085b100 with size: 0.016113 MiB 00:03:50.778 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2491726 00:03:50.778 element at address: 0x20002826f240 with size: 0.002441 MiB 00:03:50.778 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:03:50.778 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:03:50.778 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2491726 00:03:50.778 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:03:50.778 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2491726 00:03:50.778 element at address: 0x20000085af00 with size: 0.000305 MiB 00:03:50.778 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2491726 00:03:50.778 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:03:50.778 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:03:50.778 13:53:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:03:50.778 13:53:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2491726 00:03:50.778 13:53:56 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2491726 ']' 00:03:50.778 13:53:56 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2491726 00:03:50.778 13:53:56 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:03:50.778 13:53:56 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:50.778 13:53:56 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2491726 00:03:50.778 13:53:56 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:50.778 13:53:56 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:50.778 13:53:56 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2491726' 00:03:50.778 killing process with pid 2491726 00:03:50.778 13:53:56 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2491726 00:03:50.778 13:53:56 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2491726 00:03:51.038 00:03:51.038 real 0m1.390s 00:03:51.038 user 0m1.452s 00:03:51.038 sys 0m0.423s 00:03:51.038 13:53:57 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.038 13:53:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:51.038 ************************************ 00:03:51.038 END TEST dpdk_mem_utility 00:03:51.038 ************************************ 00:03:51.038 13:53:57 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:03:51.038 13:53:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.038 13:53:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.038 13:53:57 -- common/autotest_common.sh@10 -- # set +x 00:03:51.038 ************************************ 00:03:51.038 START TEST event 00:03:51.038 ************************************ 00:03:51.038 13:53:57 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:03:51.038 * Looking for test storage... 00:03:51.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:03:51.038 13:53:57 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:51.038 13:53:57 event -- common/autotest_common.sh@1711 -- # lcov --version 00:03:51.038 13:53:57 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:51.299 13:53:57 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:51.299 13:53:57 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:51.299 13:53:57 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:51.299 13:53:57 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:51.299 13:53:57 event -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.299 13:53:57 event -- scripts/common.sh@336 -- # read -ra ver1 00:03:51.299 13:53:57 event -- scripts/common.sh@337 -- # IFS=.-: 00:03:51.299 13:53:57 event -- scripts/common.sh@337 -- # read -ra ver2 00:03:51.299 13:53:57 event -- scripts/common.sh@338 -- # local 'op=<' 00:03:51.299 13:53:57 event -- scripts/common.sh@340 -- # ver1_l=2 00:03:51.299 13:53:57 event -- scripts/common.sh@341 -- # ver2_l=1 00:03:51.299 13:53:57 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:51.299 13:53:57 event -- scripts/common.sh@344 -- # case "$op" in 00:03:51.299 13:53:57 event -- scripts/common.sh@345 -- # : 1 00:03:51.299 13:53:57 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:51.299 13:53:57 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:51.299 13:53:57 event -- scripts/common.sh@365 -- # decimal 1 00:03:51.299 13:53:57 event -- scripts/common.sh@353 -- # local d=1 00:03:51.299 13:53:57 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.299 13:53:57 event -- scripts/common.sh@355 -- # echo 1 00:03:51.299 13:53:57 event -- scripts/common.sh@365 -- # ver1[v]=1 00:03:51.299 13:53:57 event -- scripts/common.sh@366 -- # decimal 2 00:03:51.299 13:53:57 event -- scripts/common.sh@353 -- # local d=2 00:03:51.299 13:53:57 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.299 13:53:57 event -- scripts/common.sh@355 -- # echo 2 00:03:51.299 13:53:57 event -- scripts/common.sh@366 -- # ver2[v]=2 00:03:51.299 13:53:57 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:51.299 13:53:57 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:51.299 13:53:57 event -- scripts/common.sh@368 -- # return 0 00:03:51.299 13:53:57 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.299 13:53:57 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:51.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.299 --rc genhtml_branch_coverage=1 00:03:51.299 --rc genhtml_function_coverage=1 00:03:51.299 --rc genhtml_legend=1 00:03:51.299 --rc geninfo_all_blocks=1 00:03:51.299 --rc geninfo_unexecuted_blocks=1 00:03:51.299 00:03:51.299 ' 00:03:51.299 13:53:57 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:51.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.299 --rc genhtml_branch_coverage=1 00:03:51.299 --rc genhtml_function_coverage=1 00:03:51.299 --rc genhtml_legend=1 00:03:51.299 --rc geninfo_all_blocks=1 00:03:51.299 --rc geninfo_unexecuted_blocks=1 00:03:51.299 00:03:51.299 ' 00:03:51.299 13:53:57 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:51.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.299 --rc genhtml_branch_coverage=1 00:03:51.299 --rc genhtml_function_coverage=1 00:03:51.299 --rc genhtml_legend=1 00:03:51.299 --rc geninfo_all_blocks=1 00:03:51.299 --rc geninfo_unexecuted_blocks=1 00:03:51.299 00:03:51.299 ' 00:03:51.299 13:53:57 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:51.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.299 --rc genhtml_branch_coverage=1 00:03:51.299 --rc genhtml_function_coverage=1 00:03:51.299 --rc genhtml_legend=1 00:03:51.299 --rc geninfo_all_blocks=1 00:03:51.299 --rc geninfo_unexecuted_blocks=1 00:03:51.299 00:03:51.299 ' 00:03:51.299 13:53:57 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:03:51.299 13:53:57 event -- bdev/nbd_common.sh@6 -- # set -e 00:03:51.299 13:53:57 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:51.299 13:53:57 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:03:51.299 13:53:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.299 13:53:57 event -- common/autotest_common.sh@10 -- # set +x 00:03:51.299 ************************************ 00:03:51.299 START TEST event_perf 00:03:51.299 ************************************ 00:03:51.299 13:53:57 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:03:51.299 Running I/O for 1 seconds...[2024-12-05 13:53:57.436729] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:03:51.299 [2024-12-05 13:53:57.436830] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2492130 ] 00:03:51.299 [2024-12-05 13:53:57.529686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:51.299 [2024-12-05 13:53:57.572066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:51.299 [2024-12-05 13:53:57.572222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:51.300 [2024-12-05 13:53:57.572375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:51.300 Running I/O for 1 seconds...[2024-12-05 13:53:57.572376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:03:52.681 00:03:52.681 lcore 0: 175801 00:03:52.681 lcore 1: 175804 00:03:52.681 lcore 2: 175804 00:03:52.681 lcore 3: 175802 00:03:52.681 done. 00:03:52.681 00:03:52.681 real 0m1.185s 00:03:52.681 user 0m4.099s 00:03:52.681 sys 0m0.085s 00:03:52.681 13:53:58 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.681 13:53:58 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:03:52.681 ************************************ 00:03:52.681 END TEST event_perf 00:03:52.681 ************************************ 00:03:52.681 13:53:58 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:03:52.681 13:53:58 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:03:52.681 13:53:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.681 13:53:58 event -- common/autotest_common.sh@10 -- # set +x 00:03:52.681 ************************************ 00:03:52.681 START TEST event_reactor 00:03:52.681 ************************************ 00:03:52.681 13:53:58 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:03:52.681 [2024-12-05 13:53:58.698366] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
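The dpdk_mem_utility run above produced its heap and memzone listings in two steps: the env_dpdk_get_mem_stats RPC makes the target write a dump file, and scripts/dpdk_mem_info.py parses it. A hedged reconstruction of that sequence (the dump path comes from the RPC reply in the log; dpdk_mem_info.py is assumed to read it by default):

    ./scripts/rpc.py env_dpdk_get_mem_stats   # reply: { "filename": "/tmp/spdk_mem_dump.txt" }
    ./scripts/dpdk_mem_info.py                # summary: heaps, mempools, memzones
    ./scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0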
00:03:52.682 [2024-12-05 13:53:58.698484] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2492284 ] 00:03:52.682 [2024-12-05 13:53:58.789340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.682 [2024-12-05 13:53:58.827848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:53.620 test_start 00:03:53.620 oneshot 00:03:53.620 tick 100 00:03:53.620 tick 100 00:03:53.620 tick 250 00:03:53.620 tick 100 00:03:53.620 tick 100 00:03:53.620 tick 100 00:03:53.620 tick 250 00:03:53.620 tick 500 00:03:53.620 tick 100 00:03:53.620 tick 100 00:03:53.620 tick 250 00:03:53.620 tick 100 00:03:53.620 tick 100 00:03:53.620 test_end 00:03:53.620 00:03:53.620 real 0m1.176s 00:03:53.620 user 0m1.089s 00:03:53.620 sys 0m0.083s 00:03:53.620 13:53:59 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:53.620 13:53:59 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:03:53.620 ************************************ 00:03:53.620 END TEST event_reactor 00:03:53.620 ************************************ 00:03:53.620 13:53:59 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:03:53.620 13:53:59 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:03:53.620 13:53:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.620 13:53:59 event -- common/autotest_common.sh@10 -- # set +x 00:03:53.880 ************************************ 00:03:53.880 START TEST event_reactor_perf 00:03:53.880 ************************************ 00:03:53.880 13:53:59 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:03:53.880 [2024-12-05 13:53:59.956013] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:03:53.880 [2024-12-05 13:53:59.956108] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2492520 ] 00:03:53.880 [2024-12-05 13:54:00.045525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.880 [2024-12-05 13:54:00.089410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.819 test_start 00:03:54.819 test_end 00:03:54.819 Performance: 540415 events per second 00:03:54.819 00:03:54.819 real 0m1.182s 00:03:54.819 user 0m1.095s 00:03:54.819 sys 0m0.082s 00:03:54.819 13:54:01 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.079 13:54:01 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:03:55.079 ************************************ 00:03:55.079 END TEST event_reactor_perf 00:03:55.079 ************************************ 00:03:55.079 13:54:01 event -- event/event.sh@49 -- # uname -s 00:03:55.079 13:54:01 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:03:55.079 13:54:01 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:03:55.079 13:54:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:55.079 13:54:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.079 13:54:01 event -- common/autotest_common.sh@10 -- # set +x 00:03:55.079 ************************************ 00:03:55.079 START TEST event_scheduler 00:03:55.079 ************************************ 00:03:55.079 13:54:01 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:03:55.079 * Looking for test storage... 
00:03:55.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:03:55.079 13:54:01 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:55.079 13:54:01 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:03:55.079 13:54:01 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:55.339 13:54:01 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:55.339 13:54:01 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:03:55.339 13:54:01 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:55.339 13:54:01 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:55.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.339 --rc genhtml_branch_coverage=1 00:03:55.339 --rc genhtml_function_coverage=1 00:03:55.339 --rc genhtml_legend=1 00:03:55.339 --rc geninfo_all_blocks=1 00:03:55.339 --rc geninfo_unexecuted_blocks=1 00:03:55.339 00:03:55.339 ' 00:03:55.339 13:54:01 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:55.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.339 --rc genhtml_branch_coverage=1 00:03:55.339 --rc genhtml_function_coverage=1 00:03:55.339 --rc genhtml_legend=1 00:03:55.339 --rc geninfo_all_blocks=1 00:03:55.339 --rc geninfo_unexecuted_blocks=1 00:03:55.339 00:03:55.339 ' 00:03:55.339 13:54:01 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:55.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.339 --rc genhtml_branch_coverage=1 00:03:55.339 --rc genhtml_function_coverage=1 00:03:55.339 --rc genhtml_legend=1 00:03:55.339 --rc geninfo_all_blocks=1 00:03:55.339 --rc geninfo_unexecuted_blocks=1 00:03:55.339 00:03:55.339 ' 00:03:55.339 13:54:01 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:55.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.339 --rc genhtml_branch_coverage=1 00:03:55.339 --rc genhtml_function_coverage=1 00:03:55.339 --rc genhtml_legend=1 00:03:55.339 --rc geninfo_all_blocks=1 00:03:55.339 --rc geninfo_unexecuted_blocks=1 00:03:55.339 00:03:55.339 ' 00:03:55.339 13:54:01 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:03:55.339 13:54:01 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2492993 00:03:55.339 13:54:01 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:03:55.339 13:54:01 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2492993 00:03:55.339 13:54:01 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:03:55.339 13:54:01 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2492993 ']' 00:03:55.339 13:54:01 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:55.339 13:54:01 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:55.339 13:54:01 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:55.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:55.339 13:54:01 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:55.339 13:54:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:55.339 [2024-12-05 13:54:01.454704] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:03:55.339 [2024-12-05 13:54:01.454777] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2492993 ] 00:03:55.339 [2024-12-05 13:54:01.536026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:55.339 [2024-12-05 13:54:01.592081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.339 [2024-12-05 13:54:01.592243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:55.339 [2024-12-05 13:54:01.592398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:03:55.339 [2024-12-05 13:54:01.592399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:56.278 13:54:02 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:56.278 13:54:02 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:03:56.278 13:54:02 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:03:56.278 13:54:02 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.278 13:54:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:56.278 [2024-12-05 13:54:02.343011] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:03:56.278 [2024-12-05 13:54:02.343030] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:03:56.278 [2024-12-05 13:54:02.343040] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:03:56.278 [2024-12-05 13:54:02.343046] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:03:56.278 [2024-12-05 13:54:02.343052] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:03:56.279 13:54:02 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.279 13:54:02 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:03:56.279 13:54:02 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.279 13:54:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:56.279 [2024-12-05 13:54:02.411284] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:03:56.279 13:54:02 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.279 13:54:02 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:03:56.279 13:54:02 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.279 13:54:02 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.279 13:54:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:56.279 ************************************ 00:03:56.279 START TEST scheduler_create_thread 00:03:56.279 ************************************ 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:56.279 2 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:56.279 3 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:56.279 4 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:56.279 5 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:56.279 6 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- 
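The scheduler_create_thread subtest that follows drives everything through a test-local RPC plugin. A representative pair of the traced calls, one always-busy thread and one idle thread pinned to core 0 (the plugin module must be importable by rpc.py, e.g. via PYTHONPATH):

    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0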
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:56.279 7 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:56.279 8 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:56.279 9 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.279 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:56.849 10 00:03:56.849 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:56.849 13:54:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:03:56.849 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:56.849 13:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.234 13:54:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.234 13:54:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:03:58.234 13:54:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:03:58.234 13:54:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.234 13:54:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:59.175 13:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.175 13:54:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:03:59.175 13:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.175 13:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:59.744 13:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:59.744 13:54:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:03:59.744 13:54:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:03:59.744 13:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:59.744 13:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:00.685 13:54:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.685 00:04:00.685 real 0m4.225s 00:04:00.685 user 0m0.025s 00:04:00.685 sys 0m0.006s 00:04:00.685 13:54:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.685 13:54:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:00.685 ************************************ 00:04:00.685 END TEST scheduler_create_thread 00:04:00.685 ************************************ 00:04:00.685 13:54:06 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:00.685 13:54:06 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2492993 00:04:00.685 13:54:06 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2492993 ']' 00:04:00.685 13:54:06 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2492993 00:04:00.685 13:54:06 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:00.685 13:54:06 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:00.685 13:54:06 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2492993 00:04:00.685 13:54:06 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:00.685 13:54:06 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:00.685 13:54:06 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2492993' 00:04:00.685 killing process with pid 2492993 00:04:00.685 13:54:06 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2492993 00:04:00.685 13:54:06 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2492993 00:04:00.685 [2024-12-05 13:54:06.956940] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
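The killprocess helper traced throughout this log first checks that the PID is still alive and inspects its command name before signalling it and waiting for exit. A simplified sketch of that shape (the real helper in autotest_common.sh carries extra guards, e.g. for sudo-wrapped processes):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                      # still running?
        local pname=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                 # reap if it is our child
    }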
00:04:00.946 00:04:00.946 real 0m5.915s 00:04:00.946 user 0m13.279s 00:04:00.946 sys 0m0.432s 00:04:00.946 13:54:07 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.946 13:54:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:00.946 ************************************ 00:04:00.946 END TEST event_scheduler 00:04:00.946 ************************************ 00:04:00.946 13:54:07 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:00.946 13:54:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:00.946 13:54:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.946 13:54:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.946 13:54:07 event -- common/autotest_common.sh@10 -- # set +x 00:04:00.946 ************************************ 00:04:00.946 START TEST app_repeat 00:04:00.946 ************************************ 00:04:00.946 13:54:07 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:00.947 13:54:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:00.947 13:54:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:00.947 13:54:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:00.947 13:54:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:00.947 13:54:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:00.947 13:54:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:00.947 13:54:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:00.947 13:54:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2494244 00:04:00.947 13:54:07 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:00.947 13:54:07 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:00.947 13:54:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2494244' 00:04:00.947 Process app_repeat pid: 2494244 00:04:00.947 13:54:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:00.947 13:54:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:00.947 spdk_app_start Round 0 00:04:00.947 13:54:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2494244 /var/tmp/spdk-nbd.sock 00:04:00.947 13:54:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2494244 ']' 00:04:00.947 13:54:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:00.947 13:54:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:00.947 13:54:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:00.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:00.947 13:54:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:00.947 13:54:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:00.947 [2024-12-05 13:54:07.238383] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
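app_repeat then begins: event.sh launches the app_repeat binary on two cores with a private RPC socket and arms a cleanup trap before the first round. A condensed sketch of that launch pattern (paths abbreviated, and a plain kill standing in for the harness's killprocess helper):

rpc_server=/var/tmp/spdk-nbd.sock
# two cores (mask 0x3), 4-second repeat interval, dedicated RPC socket
test/event/app_repeat/app_repeat -r "$rpc_server" -m 0x3 -t 4 &
repeat_pid=$!
# make sure the app dies on any exit path, not just a clean run
trap 'kill -9 $repeat_pid; exit 1' SIGINT SIGTERM EXIT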
00:04:00.947 [2024-12-05 13:54:07.238466] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2494244 ] 00:04:01.206 [2024-12-05 13:54:07.326772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:01.206 [2024-12-05 13:54:07.368398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:01.206 [2024-12-05 13:54:07.368399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.206 13:54:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:01.206 13:54:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:01.206 13:54:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:01.466 Malloc0 00:04:01.466 13:54:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:01.726 Malloc1 00:04:01.726 13:54:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:01.726 13:54:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:01.726 13:54:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:01.726 13:54:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:01.726 13:54:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:01.726 13:54:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:01.726 13:54:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:01.726 13:54:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:01.726 13:54:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:01.726 13:54:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:01.726 13:54:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:01.726 13:54:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:01.726 13:54:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:01.726 13:54:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:01.726 13:54:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:01.726 13:54:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:01.985 /dev/nbd0 00:04:01.985 13:54:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:01.985 13:54:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:01.986 13:54:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:01.986 13:54:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:01.986 13:54:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:01.986 13:54:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:01.986 13:54:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:04:01.986 13:54:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:01.986 13:54:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:01.986 13:54:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:01.986 13:54:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:01.986 1+0 records in 00:04:01.986 1+0 records out 00:04:01.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271982 s, 15.1 MB/s 00:04:01.986 13:54:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:01.986 13:54:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:01.986 13:54:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:01.986 13:54:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:01.986 13:54:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:01.986 13:54:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:01.986 13:54:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:01.986 13:54:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:01.986 /dev/nbd1 00:04:01.986 13:54:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:02.245 13:54:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:02.245 13:54:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:02.245 13:54:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:02.245 13:54:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:02.245 13:54:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:02.245 13:54:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:02.245 13:54:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:02.245 13:54:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:02.245 13:54:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:02.245 13:54:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:02.246 1+0 records in 00:04:02.246 1+0 records out 00:04:02.246 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232714 s, 17.6 MB/s 00:04:02.246 13:54:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:02.246 13:54:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:02.246 13:54:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:02.246 13:54:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:02.246 13:54:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:02.246 13:54:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:02.246 13:54:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:02.246 
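Both NBD devices pass the same readiness probe before the test touches them: poll /proc/partitions for the device name, then read one 4 KiB block with O_DIRECT to prove the mapping actually serves I/O. An assumed standalone form of that waitfornbd helper (the temp-file path and poll interval here are placeholders):

waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # a single direct read; a zero-length result means the device never came up
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]
}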
13:54:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:02.246 13:54:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:02.246 13:54:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:02.246 13:54:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:02.246 { 00:04:02.246 "nbd_device": "/dev/nbd0", 00:04:02.246 "bdev_name": "Malloc0" 00:04:02.246 }, 00:04:02.246 { 00:04:02.246 "nbd_device": "/dev/nbd1", 00:04:02.246 "bdev_name": "Malloc1" 00:04:02.246 } 00:04:02.246 ]' 00:04:02.246 13:54:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:02.246 { 00:04:02.246 "nbd_device": "/dev/nbd0", 00:04:02.246 "bdev_name": "Malloc0" 00:04:02.246 }, 00:04:02.246 { 00:04:02.246 "nbd_device": "/dev/nbd1", 00:04:02.246 "bdev_name": "Malloc1" 00:04:02.246 } 00:04:02.246 ]' 00:04:02.246 13:54:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:02.246 13:54:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:02.246 /dev/nbd1' 00:04:02.246 13:54:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:02.246 /dev/nbd1' 00:04:02.246 13:54:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:02.506 256+0 records in 00:04:02.506 256+0 records out 00:04:02.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127077 s, 82.5 MB/s 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:02.506 256+0 records in 00:04:02.506 256+0 records out 00:04:02.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120525 s, 87.0 MB/s 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:02.506 256+0 records in 00:04:02.506 256+0 records out 00:04:02.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133815 s, 78.4 MB/s 00:04:02.506 13:54:08 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:02.506 13:54:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:02.766 13:54:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:02.766 13:54:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:02.766 13:54:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:02.766 13:54:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:02.766 13:54:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:02.766 13:54:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:02.766 13:54:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:02.766 13:54:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:02.766 13:54:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:02.766 13:54:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:02.766 13:54:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:02.766 13:54:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:02.766 13:54:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:02.766 13:54:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:02.766 13:54:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:02.766 13:54:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:02.766 13:54:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:02.766 13:54:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:03.025 13:54:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:03.025 13:54:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:03.025 13:54:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:03.025 13:54:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:03.025 13:54:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:03.025 13:54:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:03.025 13:54:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:03.025 13:54:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:03.025 13:54:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:03.025 13:54:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:03.025 13:54:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:03.025 13:54:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:03.025 13:54:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:03.284 13:54:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:03.284 [2024-12-05 13:54:09.511226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:03.284 [2024-12-05 13:54:09.539900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:03.284 [2024-12-05 13:54:09.539901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.284 [2024-12-05 13:54:09.568802] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:03.284 [2024-12-05 13:54:09.568832] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:06.580 13:54:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:06.580 13:54:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:06.580 spdk_app_start Round 1 00:04:06.580 13:54:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2494244 /var/tmp/spdk-nbd.sock 00:04:06.580 13:54:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2494244 ']' 00:04:06.580 13:54:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:06.580 13:54:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:06.580 13:54:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:06.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
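Round 0 above already shows the whole data path that rounds 1 and 2 repeat: write one 1 MiB random template through each /dev/nbd device, then compare the devices byte-for-byte against that template. A condensed sketch of the nbd_dd_data_verify flow (template path assumed):

tmp_file=/tmp/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)
# write phase: one shared random template, copied to every device
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done
# verify phase: the first 1 MiB of each device must match the template
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"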
00:04:06.580 13:54:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:06.580 13:54:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:06.580 13:54:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:06.580 13:54:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:06.580 13:54:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:06.580 Malloc0 00:04:06.580 13:54:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:06.840 Malloc1 00:04:06.840 13:54:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:06.840 13:54:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:06.840 13:54:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:06.840 13:54:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:06.840 13:54:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:06.840 13:54:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:06.840 13:54:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:06.840 13:54:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:06.840 13:54:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:06.840 13:54:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:06.840 13:54:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:06.840 13:54:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:06.840 13:54:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:06.840 13:54:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:06.840 13:54:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:06.840 13:54:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:06.840 /dev/nbd0 00:04:07.100 13:54:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:07.100 13:54:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:07.100 13:54:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:07.100 13:54:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:07.100 13:54:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:07.100 13:54:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:07.100 13:54:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:07.100 13:54:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:07.100 13:54:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:07.100 13:54:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:07.100 13:54:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:07.100 1+0 records in 00:04:07.100 1+0 records out 00:04:07.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283472 s, 14.4 MB/s 00:04:07.100 13:54:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:07.100 13:54:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:07.100 13:54:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:07.100 13:54:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:07.100 13:54:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:07.100 13:54:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:07.101 13:54:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:07.101 13:54:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:07.101 /dev/nbd1 00:04:07.101 13:54:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:07.101 13:54:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:07.101 13:54:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:07.101 13:54:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:07.101 13:54:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:07.101 13:54:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:07.101 13:54:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:07.101 13:54:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:07.101 13:54:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:07.101 13:54:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:07.101 13:54:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:07.101 1+0 records in 00:04:07.101 1+0 records out 00:04:07.101 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285869 s, 14.3 MB/s 00:04:07.101 13:54:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:07.101 13:54:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:07.101 13:54:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:07.101 13:54:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:07.101 13:54:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:07.101 13:54:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:07.101 13:54:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:07.101 13:54:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:07.101 13:54:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:07.101 13:54:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:07.361 13:54:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:07.361 { 00:04:07.361 "nbd_device": "/dev/nbd0", 00:04:07.361 "bdev_name": "Malloc0" 00:04:07.361 }, 00:04:07.361 { 00:04:07.361 "nbd_device": "/dev/nbd1", 00:04:07.361 "bdev_name": "Malloc1" 00:04:07.361 } 00:04:07.361 ]' 00:04:07.361 13:54:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:07.361 { 00:04:07.361 "nbd_device": "/dev/nbd0", 00:04:07.361 "bdev_name": "Malloc0" 00:04:07.361 }, 00:04:07.361 { 00:04:07.361 "nbd_device": "/dev/nbd1", 00:04:07.361 "bdev_name": "Malloc1" 00:04:07.361 } 00:04:07.361 ]' 00:04:07.361 13:54:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:07.361 13:54:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:07.361 /dev/nbd1' 00:04:07.361 13:54:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:07.361 /dev/nbd1' 00:04:07.361 13:54:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:07.361 13:54:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:07.361 13:54:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:07.361 13:54:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:07.361 13:54:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:07.361 13:54:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:07.361 13:54:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:07.361 13:54:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:07.361 13:54:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:07.361 13:54:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:07.361 13:54:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:07.361 13:54:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:07.361 256+0 records in 00:04:07.361 256+0 records out 00:04:07.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012755 s, 82.2 MB/s 00:04:07.361 13:54:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:07.361 13:54:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:07.640 256+0 records in 00:04:07.640 256+0 records out 00:04:07.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124904 s, 84.0 MB/s 00:04:07.640 13:54:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:07.640 13:54:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:07.640 256+0 records in 00:04:07.640 256+0 records out 00:04:07.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127937 s, 82.0 MB/s 00:04:07.640 13:54:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:07.640 13:54:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:07.640 13:54:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:07.640 13:54:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:07.640 13:54:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:07.640 13:54:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:07.640 13:54:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:07.640 13:54:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:07.640 13:54:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:07.640 13:54:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:07.640 13:54:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:07.640 13:54:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:07.640 13:54:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:07.640 13:54:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:07.640 13:54:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:07.640 13:54:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:07.640 13:54:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:07.641 13:54:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:07.641 13:54:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:07.641 13:54:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:07.641 13:54:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:07.641 13:54:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:07.641 13:54:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:07.641 13:54:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:07.641 13:54:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:07.641 13:54:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:07.641 13:54:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:07.641 13:54:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:07.641 13:54:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:07.901 13:54:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:07.901 13:54:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:07.901 13:54:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:07.901 13:54:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:07.901 13:54:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:07.901 13:54:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:07.901 13:54:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:07.901 13:54:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:07.901 13:54:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:07.901 13:54:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:07.901 13:54:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:08.161 13:54:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:08.161 13:54:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:08.161 13:54:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:08.161 13:54:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:08.161 13:54:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:08.161 13:54:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:08.161 13:54:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:08.161 13:54:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:08.162 13:54:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:08.162 13:54:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:08.162 13:54:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:08.162 13:54:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:08.162 13:54:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:08.422 13:54:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:08.422 [2024-12-05 13:54:14.596701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:08.422 [2024-12-05 13:54:14.625601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:08.422 [2024-12-05 13:54:14.625602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.422 [2024-12-05 13:54:14.655005] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:08.422 [2024-12-05 13:54:14.655038] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:11.736 13:54:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:11.736 13:54:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:11.736 spdk_app_start Round 2 00:04:11.736 13:54:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2494244 /var/tmp/spdk-nbd.sock 00:04:11.736 13:54:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2494244 ']' 00:04:11.736 13:54:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:11.736 13:54:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:11.736 13:54:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:11.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
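Teardown is symmetric: after each nbd_stop_disk RPC, the harness polls /proc/partitions until the kernel entry disappears. An assumed standalone form of that waitfornbd_exit helper (the poll interval is a placeholder):

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # done as soon as the partition entry is gone
        grep -q -w "$nbd_name" /proc/partitions || break
        sleep 0.1
    done
    return 0
}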
00:04:11.736 13:54:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:11.736 13:54:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:11.736 13:54:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:11.736 13:54:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:11.736 13:54:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:11.736 Malloc0 00:04:11.736 13:54:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:11.995 Malloc1 00:04:11.995 13:54:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:11.995 13:54:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.995 13:54:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:11.995 13:54:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:11.995 13:54:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.995 13:54:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:11.995 13:54:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:11.995 13:54:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.995 13:54:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:11.995 13:54:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:11.995 13:54:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.995 13:54:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:11.995 13:54:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:11.995 13:54:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:11.995 13:54:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:11.996 13:54:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:11.996 /dev/nbd0 00:04:12.255 13:54:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:12.255 13:54:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:12.255 1+0 records in 00:04:12.255 1+0 records out 00:04:12.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310603 s, 13.2 MB/s 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:12.255 13:54:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:12.255 13:54:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:12.255 13:54:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:12.255 /dev/nbd1 00:04:12.255 13:54:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:12.255 13:54:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:12.255 1+0 records in 00:04:12.255 1+0 records out 00:04:12.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000164857 s, 24.8 MB/s 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:12.255 13:54:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:12.256 13:54:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:12.256 13:54:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:12.256 13:54:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:12.256 13:54:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:12.256 13:54:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:12.256 13:54:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:12.256 13:54:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.256 13:54:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:12.516 13:54:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:12.517 { 00:04:12.517 "nbd_device": "/dev/nbd0", 00:04:12.517 "bdev_name": "Malloc0" 00:04:12.517 }, 00:04:12.517 { 00:04:12.517 "nbd_device": "/dev/nbd1", 00:04:12.517 "bdev_name": "Malloc1" 00:04:12.517 } 00:04:12.517 ]' 00:04:12.517 13:54:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:12.517 { 00:04:12.517 "nbd_device": "/dev/nbd0", 00:04:12.517 "bdev_name": "Malloc0" 00:04:12.517 }, 00:04:12.517 { 00:04:12.517 "nbd_device": "/dev/nbd1", 00:04:12.517 "bdev_name": "Malloc1" 00:04:12.517 } 00:04:12.517 ]' 00:04:12.517 13:54:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:12.517 13:54:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:12.517 /dev/nbd1' 00:04:12.517 13:54:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:12.517 /dev/nbd1' 00:04:12.517 13:54:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:12.517 13:54:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:12.517 13:54:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:12.517 13:54:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:12.517 13:54:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:12.517 13:54:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:12.517 13:54:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.517 13:54:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:12.517 13:54:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:12.517 13:54:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:12.517 13:54:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:12.517 13:54:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:12.517 256+0 records in 00:04:12.517 256+0 records out 00:04:12.517 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127817 s, 82.0 MB/s 00:04:12.517 13:54:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:12.517 13:54:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:12.517 256+0 records in 00:04:12.517 256+0 records out 00:04:12.517 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118926 s, 88.2 MB/s 00:04:12.517 13:54:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:12.517 13:54:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:12.778 256+0 records in 00:04:12.778 256+0 records out 00:04:12.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139854 s, 75.0 MB/s 00:04:12.778 13:54:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:12.778 13:54:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.778 13:54:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:12.778 13:54:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:12.778 13:54:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:12.778 13:54:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:12.778 13:54:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:12.778 13:54:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:12.778 13:54:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:12.778 13:54:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:12.778 13:54:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:12.778 13:54:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:12.778 13:54:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:12.778 13:54:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.778 13:54:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.778 13:54:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:12.778 13:54:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:12.778 13:54:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:12.778 13:54:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:12.778 13:54:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:12.778 13:54:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:12.778 13:54:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:12.778 13:54:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:12.778 13:54:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:12.778 13:54:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:12.778 13:54:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:12.778 13:54:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:12.778 13:54:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:12.778 13:54:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:13.039 13:54:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:13.039 13:54:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:13.039 13:54:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:13.039 13:54:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:13.039 13:54:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:13.039 13:54:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:13.039 13:54:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:13.039 13:54:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:13.039 13:54:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:13.039 13:54:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.039 13:54:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:13.300 13:54:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:13.300 13:54:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:13.300 13:54:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:13.300 13:54:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:13.300 13:54:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:13.300 13:54:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:13.300 13:54:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:13.300 13:54:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:13.300 13:54:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:13.300 13:54:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:13.300 13:54:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:13.300 13:54:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:13.300 13:54:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:13.561 13:54:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:13.561 [2024-12-05 13:54:19.727587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:13.561 [2024-12-05 13:54:19.756485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.561 [2024-12-05 13:54:19.756485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:13.561 [2024-12-05 13:54:19.785412] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:13.561 [2024-12-05 13:54:19.785442] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:16.874 13:54:22 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2494244 /var/tmp/spdk-nbd.sock 00:04:16.874 13:54:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2494244 ']' 00:04:16.874 13:54:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:16.874 13:54:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:16.874 13:54:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:16.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
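Each round ends by proving the device count really dropped to zero: nbd_get_disks is queried over the per-test RPC socket, the device nodes are extracted with jq, and grep -c counts them. Roughly (rpc.py path assumed):

count=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks |
    jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
# grep -c still prints 0 when nothing matches, so the count is always usable
[ "$count" -eq 0 ] || echo "nbd devices still present: $count" >&2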
00:04:16.874 13:54:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:16.874 13:54:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:16.874 13:54:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:16.874 13:54:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:16.874 13:54:22 event.app_repeat -- event/event.sh@39 -- # killprocess 2494244 00:04:16.874 13:54:22 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2494244 ']' 00:04:16.874 13:54:22 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2494244 00:04:16.874 13:54:22 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:16.874 13:54:22 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.874 13:54:22 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2494244 00:04:16.874 13:54:22 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:16.874 13:54:22 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:16.874 13:54:22 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2494244' 00:04:16.874 killing process with pid 2494244 00:04:16.874 13:54:22 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2494244 00:04:16.874 13:54:22 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2494244 00:04:16.874 spdk_app_start is called in Round 0. 00:04:16.874 Shutdown signal received, stop current app iteration 00:04:16.874 Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 reinitialization... 00:04:16.874 spdk_app_start is called in Round 1. 00:04:16.874 Shutdown signal received, stop current app iteration 00:04:16.874 Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 reinitialization... 00:04:16.874 spdk_app_start is called in Round 2. 00:04:16.874 Shutdown signal received, stop current app iteration 00:04:16.874 Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 reinitialization... 00:04:16.874 spdk_app_start is called in Round 3. 
00:04:16.874 Shutdown signal received, stop current app iteration 00:04:16.874 13:54:22 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:16.874 13:54:22 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:16.874 00:04:16.874 real 0m15.784s 00:04:16.874 user 0m34.900s 00:04:16.874 sys 0m2.223s 00:04:16.874 13:54:22 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.874 13:54:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:16.874 ************************************ 00:04:16.874 END TEST app_repeat 00:04:16.874 ************************************ 00:04:16.874 13:54:23 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:16.874 13:54:23 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:16.874 13:54:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.874 13:54:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.874 13:54:23 event -- common/autotest_common.sh@10 -- # set +x 00:04:16.874 ************************************ 00:04:16.874 START TEST cpu_locks 00:04:16.874 ************************************ 00:04:16.874 13:54:23 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:16.874 * Looking for test storage... 00:04:16.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:16.874 13:54:23 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:16.874 13:54:23 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:04:16.874 13:54:23 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:17.135 13:54:23 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.135 13:54:23 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:17.136 13:54:23 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.136 13:54:23 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:17.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.136 --rc genhtml_branch_coverage=1 00:04:17.136 --rc genhtml_function_coverage=1 00:04:17.136 --rc genhtml_legend=1 00:04:17.136 --rc geninfo_all_blocks=1 00:04:17.136 --rc geninfo_unexecuted_blocks=1 00:04:17.136 00:04:17.136 ' 00:04:17.136 13:54:23 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:17.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.136 --rc genhtml_branch_coverage=1 00:04:17.136 --rc genhtml_function_coverage=1 00:04:17.136 --rc genhtml_legend=1 00:04:17.136 --rc geninfo_all_blocks=1 00:04:17.136 --rc geninfo_unexecuted_blocks=1 00:04:17.136 00:04:17.136 ' 00:04:17.136 13:54:23 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:17.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.136 --rc genhtml_branch_coverage=1 00:04:17.136 --rc genhtml_function_coverage=1 00:04:17.136 --rc genhtml_legend=1 00:04:17.136 --rc geninfo_all_blocks=1 00:04:17.136 --rc geninfo_unexecuted_blocks=1 00:04:17.136 00:04:17.136 ' 00:04:17.136 13:54:23 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:17.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.136 --rc genhtml_branch_coverage=1 00:04:17.136 --rc genhtml_function_coverage=1 00:04:17.136 --rc genhtml_legend=1 00:04:17.136 --rc geninfo_all_blocks=1 00:04:17.136 --rc geninfo_unexecuted_blocks=1 00:04:17.136 00:04:17.136 ' 00:04:17.136 13:54:23 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:17.136 13:54:23 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:17.136 13:54:23 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:17.136 13:54:23 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:17.136 13:54:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.136 13:54:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.136 13:54:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:17.136 ************************************ 
00:04:17.136 START TEST default_locks 00:04:17.136 ************************************ 00:04:17.136 13:54:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:17.136 13:54:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2498131 00:04:17.136 13:54:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2498131 00:04:17.136 13:54:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:17.136 13:54:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2498131 ']' 00:04:17.136 13:54:23 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.136 13:54:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:17.136 13:54:23 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.136 13:54:23 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:17.136 13:54:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:17.136 [2024-12-05 13:54:23.363041] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:04:17.136 [2024-12-05 13:54:23.363105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498131 ] 00:04:17.396 [2024-12-05 13:54:23.450607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.396 [2024-12-05 13:54:23.485727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.966 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:17.966 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:17.966 13:54:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2498131 00:04:17.966 13:54:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2498131 00:04:17.966 13:54:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:18.226 lslocks: write error 00:04:18.226 13:54:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2498131 00:04:18.226 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2498131 ']' 00:04:18.226 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2498131 00:04:18.226 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:18.226 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:18.226 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2498131 00:04:18.226 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:18.226 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:18.226 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 2498131' 00:04:18.226 killing process with pid 2498131 00:04:18.226 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2498131 00:04:18.226 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2498131 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2498131 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2498131 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2498131 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2498131 ']' 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
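The NOT wrapper entered here runs its argument and inverts the result, so this step passes only because waitforlisten fails against the already-killed pid; the es=1 arithmetic on the lines that follow corresponds to a helper of roughly this shape (simplified; the real autotest_common.sh also special-cases signal exits above 128 and an expected-error pattern, both skipped here):

    NOT() {
        local es=0
        "$@" || es=$?      # run the wrapped command, remember its failure code
        (( !es == 0 ))     # arithmetic truth test: exit 0 iff the command failed
    }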
00:04:18.487 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:18.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2498131) - No such process 00:04:18.487 ERROR: process (pid: 2498131) is no longer running 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:18.487 00:04:18.487 real 0m1.237s 00:04:18.487 user 0m1.342s 00:04:18.487 sys 0m0.406s 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.487 13:54:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:18.487 ************************************ 00:04:18.487 END TEST default_locks 00:04:18.487 ************************************ 00:04:18.487 13:54:24 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:18.487 13:54:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.487 13:54:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.487 13:54:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:18.487 ************************************ 00:04:18.487 START TEST default_locks_via_rpc 00:04:18.487 ************************************ 00:04:18.487 13:54:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:18.487 13:54:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2498493 00:04:18.487 13:54:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2498493 00:04:18.487 13:54:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:18.487 13:54:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2498493 ']' 00:04:18.487 13:54:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.487 13:54:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:18.487 13:54:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
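default_locks_via_rpc, whose startup follows, exercises the lock toggles over the RPC socket rather than at launch: the traced rpc_cmd framework_disable_cpumask_locks call below releases the per-core lock files (so the no_locks glob comes back empty), and framework_enable_cpumask_locks re-acquires them (so locks_exist succeeds again). Stripped of the rpc_cmd wrapper, the two calls are plain rpc.py invocations:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # drop the core lock files at runtime
    "$rpc" -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # take them back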
00:04:18.487 13:54:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:18.487 13:54:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.487 [2024-12-05 13:54:24.675349] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:04:18.487 [2024-12-05 13:54:24.675401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498493 ] 00:04:18.487 [2024-12-05 13:54:24.758551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.747 [2024-12-05 13:54:24.791576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.317 13:54:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:19.317 13:54:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:19.317 13:54:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:19.317 13:54:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.317 13:54:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.317 13:54:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.317 13:54:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:19.317 13:54:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:19.317 13:54:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:19.317 13:54:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:19.317 13:54:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:19.317 13:54:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.317 13:54:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.317 13:54:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.317 13:54:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2498493 00:04:19.317 13:54:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2498493 00:04:19.317 13:54:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:19.910 13:54:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2498493 00:04:19.910 13:54:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2498493 ']' 00:04:19.910 13:54:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2498493 00:04:19.910 13:54:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:19.910 13:54:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:19.910 13:54:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2498493 00:04:19.910 13:54:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:19.910 
13:54:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:19.910 13:54:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2498493' 00:04:19.910 killing process with pid 2498493 00:04:19.910 13:54:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2498493 00:04:19.910 13:54:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2498493 00:04:20.264 00:04:20.264 real 0m1.618s 00:04:20.264 user 0m1.731s 00:04:20.264 sys 0m0.575s 00:04:20.264 13:54:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.264 13:54:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.264 ************************************ 00:04:20.264 END TEST default_locks_via_rpc 00:04:20.264 ************************************ 00:04:20.264 13:54:26 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:20.264 13:54:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.264 13:54:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.264 13:54:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:20.264 ************************************ 00:04:20.264 START TEST non_locking_app_on_locked_coremask 00:04:20.264 ************************************ 00:04:20.264 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:20.264 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2498860 00:04:20.264 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2498860 /var/tmp/spdk.sock 00:04:20.264 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:20.265 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2498860 ']' 00:04:20.265 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.265 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:20.265 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.265 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:20.265 13:54:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:20.265 [2024-12-05 13:54:26.368778] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:04:20.265 [2024-12-05 13:54:26.368827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498860 ] 00:04:20.265 [2024-12-05 13:54:26.453852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.265 [2024-12-05 13:54:26.484862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.204 13:54:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.204 13:54:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:21.204 13:54:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:21.204 13:54:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2498880 00:04:21.204 13:54:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2498880 /var/tmp/spdk2.sock 00:04:21.204 13:54:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2498880 ']' 00:04:21.204 13:54:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:21.204 13:54:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.204 13:54:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:21.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:21.204 13:54:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.204 13:54:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:21.204 [2024-12-05 13:54:27.185173] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:04:21.204 [2024-12-05 13:54:27.185224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498880 ] 00:04:21.204 [2024-12-05 13:54:27.272615] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
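This is the crux of non_locking_app_on_locked_coremask: the first target owns the core 0 lock, so the second instance traced here must be started with --disable-cpumask-locks (and its own RPC socket) to come up on the same mask. The launch pairing, with the binary and socket paths exactly as in the log:

    spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 &                                                 # holds the core 0 lock file
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # prints 'CPU core locks deactivated.'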
00:04:21.204 [2024-12-05 13:54:27.272641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.204 [2024-12-05 13:54:27.331198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.776 13:54:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:21.776 13:54:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:21.776 13:54:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2498860 00:04:21.776 13:54:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2498860 00:04:21.776 13:54:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:22.347 lslocks: write error 00:04:22.347 13:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2498860 00:04:22.347 13:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2498860 ']' 00:04:22.347 13:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2498860 00:04:22.347 13:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:22.347 13:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.347 13:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2498860 00:04:22.347 13:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.347 13:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.347 13:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2498860' 00:04:22.347 killing process with pid 2498860 00:04:22.347 13:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2498860 00:04:22.347 13:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2498860 00:04:22.608 13:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2498880 00:04:22.608 13:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2498880 ']' 00:04:22.608 13:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2498880 00:04:22.608 13:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:22.608 13:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.608 13:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2498880 00:04:22.868 13:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.868 13:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.868 13:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2498880' 00:04:22.868 
killing process with pid 2498880 00:04:22.868 13:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2498880 00:04:22.868 13:54:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2498880 00:04:22.868 00:04:22.868 real 0m2.826s 00:04:22.868 user 0m3.151s 00:04:22.868 sys 0m0.837s 00:04:22.868 13:54:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.868 13:54:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:22.868 ************************************ 00:04:22.868 END TEST non_locking_app_on_locked_coremask 00:04:22.868 ************************************ 00:04:23.130 13:54:29 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:23.130 13:54:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.130 13:54:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.130 13:54:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:23.130 ************************************ 00:04:23.130 START TEST locking_app_on_unlocked_coremask 00:04:23.130 ************************************ 00:04:23.130 13:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:23.130 13:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2499421 00:04:23.130 13:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2499421 /var/tmp/spdk.sock 00:04:23.130 13:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:23.130 13:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2499421 ']' 00:04:23.130 13:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.130 13:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.130 13:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.130 13:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.130 13:54:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:23.130 [2024-12-05 13:54:29.274846] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:04:23.131 [2024-12-05 13:54:29.274904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2499421 ] 00:04:23.131 [2024-12-05 13:54:29.361063] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
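A probe that recurs after every test in this section is locks_exist; the stray 'lslocks: write error' lines are its expected byproduct, since grep -q exits on the first match and lslocks then writes into a closed pipe. In outline (reconstructed from the traced cpu_locks.sh lines):

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock   # grep -q quits early; lslocks reports 'write error' on EPIPE
    }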
00:04:23.131 [2024-12-05 13:54:29.361094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.131 [2024-12-05 13:54:29.402264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.090 13:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.090 13:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:24.090 13:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2499582 00:04:24.090 13:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2499582 /var/tmp/spdk2.sock 00:04:24.090 13:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2499582 ']' 00:04:24.090 13:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:24.090 13:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:24.090 13:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.090 13:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:24.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:24.090 13:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.090 13:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:24.090 [2024-12-05 13:54:30.118073] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
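killprocess, traced several times above, does more than send a signal: it checks the pid is alive, looks up the command name (reactor_0 for an SPDK reactor), guards against terminating a sudo wrapper, then kills and reaps the process. An outline consistent with the traced steps (the sudo branch's actual handling is not visible in this log):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                           # still running?
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")  # 'reactor_0' in these tests
            [ "$process_name" = sudo ] && return 1           # never SIGTERM a sudo parent directly
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                          # reap, as the traced 'wait' does
    }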
00:04:24.090 [2024-12-05 13:54:30.118129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2499582 ] 00:04:24.090 [2024-12-05 13:54:30.207039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.090 [2024-12-05 13:54:30.265236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.661 13:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.661 13:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:24.661 13:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2499582 00:04:24.661 13:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2499582 00:04:24.661 13:54:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:24.921 lslocks: write error 00:04:24.921 13:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2499421 00:04:24.921 13:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2499421 ']' 00:04:24.921 13:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2499421 00:04:24.921 13:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:24.921 13:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:24.921 13:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2499421 00:04:25.182 13:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:25.182 13:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:25.182 13:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2499421' 00:04:25.182 killing process with pid 2499421 00:04:25.182 13:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2499421 00:04:25.182 13:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2499421 00:04:25.443 13:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2499582 00:04:25.443 13:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2499582 ']' 00:04:25.443 13:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2499582 00:04:25.443 13:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:25.443 13:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.443 13:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2499582 00:04:25.443 13:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:25.443 13:54:31 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:25.443 13:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2499582' 00:04:25.443 killing process with pid 2499582 00:04:25.443 13:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2499582 00:04:25.443 13:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2499582 00:04:25.703 00:04:25.703 real 0m2.621s 00:04:25.703 user 0m2.925s 00:04:25.703 sys 0m0.785s 00:04:25.703 13:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.703 13:54:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:25.703 ************************************ 00:04:25.703 END TEST locking_app_on_unlocked_coremask 00:04:25.703 ************************************ 00:04:25.703 13:54:31 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:25.703 13:54:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.703 13:54:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.703 13:54:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:25.703 ************************************ 00:04:25.703 START TEST locking_app_on_locked_coremask 00:04:25.703 ************************************ 00:04:25.703 13:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:25.703 13:54:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2499958 00:04:25.703 13:54:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2499958 /var/tmp/spdk.sock 00:04:25.703 13:54:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:25.703 13:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2499958 ']' 00:04:25.703 13:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.703 13:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.703 13:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.704 13:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.704 13:54:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:25.704 [2024-12-05 13:54:31.967969] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
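waitforlisten, just entered for pid 2499958, is the polling gate used before every test body; the trace shows its /var/tmp/spdk.sock default and max_retries=100. A sketch of the loop (the real helper's readiness probe is not visible in this log, so the socket test below is an assumed stand-in):

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}   # default matches the trace
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died while we waited
            [ -S "$rpc_addr" ] && return 0            # assumed probe: socket exists and is bound
            sleep 0.5
        done
        return 1
    }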
00:04:25.704 [2024-12-05 13:54:31.968019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2499958 ] 00:04:25.963 [2024-12-05 13:54:32.051222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.963 [2024-12-05 13:54:32.081464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.533 13:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:26.533 13:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:26.533 13:54:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2500214 00:04:26.533 13:54:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2500214 /var/tmp/spdk2.sock 00:04:26.534 13:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:26.534 13:54:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:26.534 13:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2500214 /var/tmp/spdk2.sock 00:04:26.534 13:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:26.534 13:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.534 13:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:26.534 13:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.534 13:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2500214 /var/tmp/spdk2.sock 00:04:26.534 13:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2500214 ']' 00:04:26.534 13:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:26.534 13:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.534 13:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:26.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:26.534 13:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.534 13:54:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:26.534 [2024-12-05 13:54:32.822633] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
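The claim failure about to appear ('Cannot create lock on core 0...') is the whole point of locking_app_on_locked_coremask: the first target holds an exclusive lock on a per-core file, so this second -m 0x1 instance cannot start. The same contention can be demonstrated with flock against the lock path seen later in this log (illustrative only, not SPDK's app.c):

    # Hold an exclusive, non-blocking lock on the core 0 lock file;
    # a second copy of this snippet fails immediately while the first sleeps.
    (
        flock -xn 9 || { echo 'Cannot create lock on core 0' >&2; exit 1; }
        sleep 5
    ) 9> /var/tmp/spdk_cpu_lock_000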
00:04:26.534 [2024-12-05 13:54:32.822705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2500214 ] 00:04:26.794 [2024-12-05 13:54:32.929523] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2499958 has claimed it. 00:04:26.794 [2024-12-05 13:54:32.929560] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:27.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2500214) - No such process 00:04:27.364 ERROR: process (pid: 2500214) is no longer running 00:04:27.364 13:54:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:27.364 13:54:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:27.364 13:54:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:27.364 13:54:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:27.364 13:54:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:27.364 13:54:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:27.364 13:54:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2499958 00:04:27.364 13:54:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2499958 00:04:27.364 13:54:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:27.624 lslocks: write error 00:04:27.625 13:54:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2499958 00:04:27.625 13:54:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2499958 ']' 00:04:27.625 13:54:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2499958 00:04:27.625 13:54:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:27.625 13:54:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:27.625 13:54:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2499958 00:04:27.886 13:54:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:27.886 13:54:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:27.886 13:54:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2499958' 00:04:27.886 killing process with pid 2499958 00:04:27.886 13:54:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2499958 00:04:27.886 13:54:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2499958 00:04:27.886 00:04:27.886 real 0m2.244s 00:04:27.886 user 0m2.533s 00:04:27.886 sys 0m0.664s 00:04:27.886 13:54:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:04:27.886 13:54:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:27.886 ************************************ 00:04:27.886 END TEST locking_app_on_locked_coremask 00:04:27.886 ************************************ 00:04:28.147 13:54:34 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:28.147 13:54:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.147 13:54:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.147 13:54:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:28.147 ************************************ 00:04:28.147 START TEST locking_overlapped_coremask 00:04:28.147 ************************************ 00:04:28.147 13:54:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:28.147 13:54:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2500476 00:04:28.147 13:54:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2500476 /var/tmp/spdk.sock 00:04:28.147 13:54:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:28.147 13:54:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2500476 ']' 00:04:28.147 13:54:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.147 13:54:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.147 13:54:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.147 13:54:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.147 13:54:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:28.147 [2024-12-05 13:54:34.287304] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:04:28.147 [2024-12-05 13:54:34.287366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2500476 ] 00:04:28.147 [2024-12-05 13:54:34.372655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:28.147 [2024-12-05 13:54:34.405281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.147 [2024-12-05 13:54:34.405428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.147 [2024-12-05 13:54:34.405430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:29.087 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.087 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:29.105 13:54:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:29.105 13:54:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2500666 00:04:29.105 13:54:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2500666 /var/tmp/spdk2.sock 00:04:29.105 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:29.105 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2500666 /var/tmp/spdk2.sock 00:04:29.105 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:29.105 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.105 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:29.105 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.105 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2500666 /var/tmp/spdk2.sock 00:04:29.105 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2500666 ']' 00:04:29.105 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:29.105 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.105 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:29.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:29.105 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.105 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:29.105 [2024-12-05 13:54:35.134677] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
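The failure that follows is pure mask arithmetic: the first target's -m 0x7 covers cores 0-2 and this second target's -m 0x1c covers cores 2-4, so they contend on core 2, exactly the core named in the claim_cpu_cores error below. A quick way to compute the overlap (hypothetical helper, not part of the test suite):

    overlap_cores() {
        local a=$(( $1 )) b=$(( $2 )) core
        for (( core = 0; core < 64; core++ )); do
            if (( (a >> core) & (b >> core) & 1 )); then
                echo "core $core"
            fi
        done
    }
    overlap_cores 0x7 0x1c   # prints: core 2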
00:04:29.105 [2024-12-05 13:54:35.134729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2500666 ] 00:04:29.105 [2024-12-05 13:54:35.247050] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2500476 has claimed it. 00:04:29.105 [2024-12-05 13:54:35.247092] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:29.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2500666) - No such process 00:04:29.675 ERROR: process (pid: 2500666) is no longer running 00:04:29.675 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.675 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:29.675 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:29.675 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:29.675 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:29.675 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:29.675 13:54:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:29.675 13:54:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:29.675 13:54:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:29.675 13:54:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:29.675 13:54:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2500476 00:04:29.675 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2500476 ']' 00:04:29.675 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2500476 00:04:29.675 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:29.675 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.675 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2500476 00:04:29.675 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.675 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.675 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2500476' 00:04:29.676 killing process with pid 2500476 00:04:29.676 13:54:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2500476 00:04:29.676 13:54:35 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2500476 00:04:29.936 00:04:29.936 real 0m1.787s 00:04:29.936 user 0m5.177s 00:04:29.936 sys 0m0.379s 00:04:29.936 13:54:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.936 13:54:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:29.936 ************************************ 00:04:29.936 END TEST locking_overlapped_coremask 00:04:29.936 ************************************ 00:04:29.936 13:54:36 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:29.936 13:54:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.936 13:54:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.936 13:54:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.936 ************************************ 00:04:29.936 START TEST locking_overlapped_coremask_via_rpc 00:04:29.936 ************************************ 00:04:29.936 13:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:29.936 13:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2500928 00:04:29.936 13:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2500928 /var/tmp/spdk.sock 00:04:29.936 13:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:29.936 13:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2500928 ']' 00:04:29.936 13:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.936 13:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.936 13:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.936 13:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.936 13:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.936 [2024-12-05 13:54:36.148831] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:04:29.936 [2024-12-05 13:54:36.148891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2500928 ] 00:04:30.196 [2024-12-05 13:54:36.237807] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
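The locking_overlapped_coremask test just closed by verifying the surviving lock files: check_remaining_locks globs /var/tmp/spdk_cpu_lock_* and compares the result to the brace expansion /var/tmp/spdk_cpu_lock_{000..002}, one file per core in the 0x7 mask. The via_rpc variant starting here adds --disable-cpumask-locks, which is why the target prints "CPU core locks deactivated": no lock files are taken at startup. A sketch of the same check, lifted from the trace above:

    # one lock file per claimed core is expected for mask 0x7 (cores 0-2)
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'locks match cores 0-2'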
00:04:30.196 [2024-12-05 13:54:36.237845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:30.196 [2024-12-05 13:54:36.280256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.196 [2024-12-05 13:54:36.280407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.196 [2024-12-05 13:54:36.280408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:30.767 13:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.767 13:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:30.767 13:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2501047 00:04:30.767 13:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:30.767 13:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2501047 /var/tmp/spdk2.sock 00:04:30.767 13:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2501047 ']' 00:04:30.767 13:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:30.767 13:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.767 13:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:30.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:30.767 13:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.767 13:54:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.767 [2024-12-05 13:54:37.002846] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:04:30.767 [2024-12-05 13:54:37.002901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2501047 ] 00:04:31.028 [2024-12-05 13:54:37.113873] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
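With locks deactivated, both targets can come up on overlapping masks (0x7 and 0x1c) without contention; the conflict is only provoked later, when each is asked to take its locks through the framework_enable_cpumask_locks RPC. A hedged sketch of that sequence (socket paths as in the log; rpc.py invocation per SPDK's scripts, shown as an illustration rather than the test's exact wrapper):

    # first target claims cores 0-2 on the default socket: expected to succeed
    scripts/rpc.py framework_enable_cpumask_locks
    # second target then tries cores 2-4 on its own socket: expected to fail on core 2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks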
00:04:31.028 [2024-12-05 13:54:37.113904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:31.028 [2024-12-05 13:54:37.191946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:31.028 [2024-12-05 13:54:37.192103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:31.028 [2024-12-05 13:54:37.192105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:31.599 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.599 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:31.599 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:31.599 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.600 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.600 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.600 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:31.600 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:31.600 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:31.600 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:31.600 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:31.600 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:31.600 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:31.600 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:31.600 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.600 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.600 [2024-12-05 13:54:37.813533] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2500928 has claimed it. 
00:04:31.600 request: 00:04:31.600 { 00:04:31.600 "method": "framework_enable_cpumask_locks", 00:04:31.600 "req_id": 1 00:04:31.600 } 00:04:31.600 Got JSON-RPC error response 00:04:31.600 response: 00:04:31.600 { 00:04:31.600 "code": -32603, 00:04:31.600 "message": "Failed to claim CPU core: 2" 00:04:31.600 } 00:04:31.600 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:31.600 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:31.600 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:31.600 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:31.600 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:31.600 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2500928 /var/tmp/spdk.sock 00:04:31.600 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2500928 ']' 00:04:31.600 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.600 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.600 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.600 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.600 13:54:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.862 13:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.862 13:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:31.862 13:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2501047 /var/tmp/spdk2.sock 00:04:31.862 13:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2501047 ']' 00:04:31.862 13:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:31.862 13:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.862 13:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:31.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
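The failure above is a standard JSON-RPC 2.0 error object: -32603 is the spec's generic "internal error" code, and SPDK attaches the specific reason in the message field ("Failed to claim CPU core: 2"); the NOT/es=1 plumbing then records this expected failure as a pass. A small parsing sketch, assuming the response object alone has been captured to resp.json:

    # extract the error code and message from the captured response object
    jq -r '.code, .message' resp.json
    # -32603
    # Failed to claim CPU core: 2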
00:04:31.862 13:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.862 13:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.122 13:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.122 13:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:32.122 13:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:32.122 13:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:32.122 13:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:32.122 13:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:32.122 00:04:32.122 real 0m2.099s 00:04:32.122 user 0m0.861s 00:04:32.122 sys 0m0.163s 00:04:32.122 13:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.122 13:54:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.122 ************************************ 00:04:32.122 END TEST locking_overlapped_coremask_via_rpc 00:04:32.122 ************************************ 00:04:32.122 13:54:38 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:32.122 13:54:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2500928 ]] 00:04:32.122 13:54:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2500928 00:04:32.122 13:54:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2500928 ']' 00:04:32.122 13:54:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2500928 00:04:32.122 13:54:38 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:32.123 13:54:38 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:32.123 13:54:38 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2500928 00:04:32.123 13:54:38 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:32.123 13:54:38 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:32.123 13:54:38 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2500928' 00:04:32.123 killing process with pid 2500928 00:04:32.123 13:54:38 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2500928 00:04:32.123 13:54:38 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2500928 00:04:32.383 13:54:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2501047 ]] 00:04:32.383 13:54:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2501047 00:04:32.383 13:54:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2501047 ']' 00:04:32.383 13:54:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2501047 00:04:32.383 13:54:38 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:32.383 13:54:38 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:04:32.383 13:54:38 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2501047 00:04:32.383 13:54:38 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:32.383 13:54:38 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:32.383 13:54:38 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2501047' 00:04:32.383 killing process with pid 2501047 00:04:32.383 13:54:38 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2501047 00:04:32.383 13:54:38 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2501047 00:04:32.644 13:54:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:32.644 13:54:38 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:32.644 13:54:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2500928 ]] 00:04:32.644 13:54:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2500928 00:04:32.644 13:54:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2500928 ']' 00:04:32.644 13:54:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2500928 00:04:32.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2500928) - No such process 00:04:32.644 13:54:38 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2500928 is not found' 00:04:32.644 Process with pid 2500928 is not found 00:04:32.644 13:54:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2501047 ]] 00:04:32.644 13:54:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2501047 00:04:32.644 13:54:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2501047 ']' 00:04:32.644 13:54:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2501047 00:04:32.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2501047) - No such process 00:04:32.644 13:54:38 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2501047 is not found' 00:04:32.644 Process with pid 2501047 is not found 00:04:32.644 13:54:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:32.644 00:04:32.644 real 0m15.737s 00:04:32.644 user 0m27.871s 00:04:32.644 sys 0m4.785s 00:04:32.644 13:54:38 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.644 13:54:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:32.644 ************************************ 00:04:32.644 END TEST cpu_locks 00:04:32.644 ************************************ 00:04:32.644 00:04:32.644 real 0m41.661s 00:04:32.644 user 1m22.619s 00:04:32.644 sys 0m8.122s 00:04:32.644 13:54:38 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.644 13:54:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:32.644 ************************************ 00:04:32.644 END TEST event 00:04:32.644 ************************************ 00:04:32.644 13:54:38 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:32.644 13:54:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.644 13:54:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.644 13:54:38 -- common/autotest_common.sh@10 -- # set +x 00:04:32.644 ************************************ 00:04:32.644 START TEST thread 00:04:32.644 ************************************ 00:04:32.644 13:54:38 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:32.905 * Looking for test storage... 00:04:32.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:32.905 13:54:39 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:32.905 13:54:39 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:04:32.905 13:54:39 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:32.905 13:54:39 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:32.905 13:54:39 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.905 13:54:39 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.905 13:54:39 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.905 13:54:39 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.905 13:54:39 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.905 13:54:39 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.905 13:54:39 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.905 13:54:39 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.905 13:54:39 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.905 13:54:39 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.905 13:54:39 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.905 13:54:39 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:32.905 13:54:39 thread -- scripts/common.sh@345 -- # : 1 00:04:32.905 13:54:39 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.905 13:54:39 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:32.905 13:54:39 thread -- scripts/common.sh@365 -- # decimal 1 00:04:32.905 13:54:39 thread -- scripts/common.sh@353 -- # local d=1 00:04:32.905 13:54:39 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.905 13:54:39 thread -- scripts/common.sh@355 -- # echo 1 00:04:32.905 13:54:39 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.905 13:54:39 thread -- scripts/common.sh@366 -- # decimal 2 00:04:32.905 13:54:39 thread -- scripts/common.sh@353 -- # local d=2 00:04:32.905 13:54:39 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.905 13:54:39 thread -- scripts/common.sh@355 -- # echo 2 00:04:32.905 13:54:39 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.905 13:54:39 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.905 13:54:39 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.905 13:54:39 thread -- scripts/common.sh@368 -- # return 0 00:04:32.905 13:54:39 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.905 13:54:39 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:32.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.905 --rc genhtml_branch_coverage=1 00:04:32.905 --rc genhtml_function_coverage=1 00:04:32.905 --rc genhtml_legend=1 00:04:32.905 --rc geninfo_all_blocks=1 00:04:32.905 --rc geninfo_unexecuted_blocks=1 00:04:32.905 00:04:32.905 ' 00:04:32.906 13:54:39 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:32.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.906 --rc genhtml_branch_coverage=1 00:04:32.906 --rc genhtml_function_coverage=1 00:04:32.906 --rc genhtml_legend=1 00:04:32.906 --rc geninfo_all_blocks=1 00:04:32.906 --rc geninfo_unexecuted_blocks=1 00:04:32.906 
00:04:32.906 ' 00:04:32.906 13:54:39 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:32.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.906 --rc genhtml_branch_coverage=1 00:04:32.906 --rc genhtml_function_coverage=1 00:04:32.906 --rc genhtml_legend=1 00:04:32.906 --rc geninfo_all_blocks=1 00:04:32.906 --rc geninfo_unexecuted_blocks=1 00:04:32.906 00:04:32.906 ' 00:04:32.906 13:54:39 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:32.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.906 --rc genhtml_branch_coverage=1 00:04:32.906 --rc genhtml_function_coverage=1 00:04:32.906 --rc genhtml_legend=1 00:04:32.906 --rc geninfo_all_blocks=1 00:04:32.906 --rc geninfo_unexecuted_blocks=1 00:04:32.906 00:04:32.906 ' 00:04:32.906 13:54:39 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:32.906 13:54:39 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:32.906 13:54:39 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.906 13:54:39 thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.906 ************************************ 00:04:32.906 START TEST thread_poller_perf 00:04:32.906 ************************************ 00:04:32.906 13:54:39 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:32.906 [2024-12-05 13:54:39.186289] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:04:32.906 [2024-12-05 13:54:39.186403] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2501535 ] 00:04:33.167 [2024-12-05 13:54:39.274608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.167 [2024-12-05 13:54:39.317242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.167 Running 1000 pollers for 1 seconds with 1 microseconds period. 
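poller_perf registers -b 1000 pollers, here with a -l 1 microsecond period, and spins them for -t 1 second; the summary that follows reduces to a single per-poll cost, which (inferred from the output below) is computed as:

    poller_cost_cyc  = busy_cycles / total_run_count
    poller_cost_nsec = poller_cost_cyc * 1e9 / tsc_hz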
00:04:34.106 [2024-12-05T12:54:40.406Z] ====================================== 00:04:34.106 [2024-12-05T12:54:40.406Z] busy:2407035532 (cyc) 00:04:34.106 [2024-12-05T12:54:40.406Z] total_run_count: 418000 00:04:34.106 [2024-12-05T12:54:40.406Z] tsc_hz: 2400000000 (cyc) 00:04:34.106 [2024-12-05T12:54:40.406Z] ====================================== 00:04:34.106 [2024-12-05T12:54:40.406Z] poller_cost: 5758 (cyc), 2399 (nsec) 00:04:34.106 00:04:34.106 real 0m1.187s 00:04:34.106 user 0m1.098s 00:04:34.106 sys 0m0.084s 00:04:34.106 13:54:40 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.106 13:54:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:34.106 ************************************ 00:04:34.106 END TEST thread_poller_perf 00:04:34.106 ************************************ 00:04:34.106 13:54:40 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:34.106 13:54:40 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:34.106 13:54:40 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.106 13:54:40 thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.367 ************************************ 00:04:34.367 START TEST thread_poller_perf 00:04:34.367 ************************************ 00:04:34.367 13:54:40 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:34.367 [2024-12-05 13:54:40.453263] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:04:34.367 [2024-12-05 13:54:40.453360] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2501844 ] 00:04:34.367 [2024-12-05 13:54:40.541825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.367 [2024-12-05 13:54:40.580951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.367 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:04:35.750 [2024-12-05T12:54:42.050Z] ====================================== 00:04:35.750 [2024-12-05T12:54:42.050Z] busy:2401664266 (cyc) 00:04:35.750 [2024-12-05T12:54:42.050Z] total_run_count: 5562000 00:04:35.750 [2024-12-05T12:54:42.050Z] tsc_hz: 2400000000 (cyc) 00:04:35.750 [2024-12-05T12:54:42.050Z] ====================================== 00:04:35.750 [2024-12-05T12:54:42.050Z] poller_cost: 431 (cyc), 179 (nsec) 00:04:35.750 00:04:35.750 real 0m1.176s 00:04:35.750 user 0m1.093s 00:04:35.750 sys 0m0.079s 00:04:35.750 13:54:41 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.750 13:54:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:35.750 ************************************ 00:04:35.750 END TEST thread_poller_perf 00:04:35.750 ************************************ 00:04:35.750 13:54:41 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:35.750 00:04:35.750 real 0m2.727s 00:04:35.750 user 0m2.361s 00:04:35.750 sys 0m0.378s 00:04:35.750 13:54:41 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.750 13:54:41 thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.750 ************************************ 00:04:35.750 END TEST thread 00:04:35.750 ************************************ 00:04:35.750 13:54:41 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:35.750 13:54:41 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:35.750 13:54:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.750 13:54:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.750 13:54:41 -- common/autotest_common.sh@10 -- # set +x 00:04:35.750 ************************************ 00:04:35.750 START TEST app_cmdline 00:04:35.750 ************************************ 00:04:35.750 13:54:41 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:35.750 * Looking for test storage... 
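Checking the two poller_perf runs above against that formula: 2407035532 / 418000 is about 5758 cycles per poll for the 1 us timed pollers, while 2401664266 / 5562000 is about 431 cycles for the untimed run; at the reported tsc_hz of 2.4 GHz these convert to roughly 2399 ns and 179 ns, matching the reported poller_cost lines. The same arithmetic in bash:

    # reproduce the reported poller_cost values from the raw counters above
    echo $(( 2407035532 / 418000 ))              # 5758 cyc, 1 us period run
    echo $(( 2401664266 / 5562000 ))             # 431 cyc, 0 us period run
    echo $(( 5758 * 1000000000 / 2400000000 ))   # 2399 nsec
    echo $(( 431  * 1000000000 / 2400000000 ))   # 179 nsec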
00:04:35.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:35.750 13:54:41 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:35.750 13:54:41 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:04:35.750 13:54:41 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:35.750 13:54:41 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.750 13:54:41 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:35.750 13:54:41 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.750 13:54:41 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:35.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.750 --rc genhtml_branch_coverage=1 00:04:35.750 --rc genhtml_function_coverage=1 00:04:35.750 --rc genhtml_legend=1 00:04:35.750 --rc geninfo_all_blocks=1 00:04:35.750 --rc geninfo_unexecuted_blocks=1 00:04:35.750 00:04:35.750 ' 00:04:35.750 13:54:41 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:35.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.750 --rc genhtml_branch_coverage=1 00:04:35.750 --rc genhtml_function_coverage=1 00:04:35.750 --rc genhtml_legend=1 00:04:35.750 --rc geninfo_all_blocks=1 00:04:35.750 --rc geninfo_unexecuted_blocks=1 
00:04:35.750 00:04:35.750 ' 00:04:35.750 13:54:41 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:35.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.750 --rc genhtml_branch_coverage=1 00:04:35.750 --rc genhtml_function_coverage=1 00:04:35.750 --rc genhtml_legend=1 00:04:35.750 --rc geninfo_all_blocks=1 00:04:35.750 --rc geninfo_unexecuted_blocks=1 00:04:35.750 00:04:35.750 ' 00:04:35.750 13:54:41 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:35.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.750 --rc genhtml_branch_coverage=1 00:04:35.750 --rc genhtml_function_coverage=1 00:04:35.750 --rc genhtml_legend=1 00:04:35.750 --rc geninfo_all_blocks=1 00:04:35.750 --rc geninfo_unexecuted_blocks=1 00:04:35.750 00:04:35.750 ' 00:04:35.750 13:54:41 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:35.750 13:54:41 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2502249 00:04:35.750 13:54:41 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2502249 00:04:35.750 13:54:41 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:35.750 13:54:41 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2502249 ']' 00:04:35.750 13:54:41 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.750 13:54:41 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.750 13:54:41 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.750 13:54:41 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.750 13:54:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:35.750 [2024-12-05 13:54:41.985111] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
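cmdline.sh launches spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, an allowlist that leaves exactly those two methods callable; the assertions that follow check that the method list has exactly two entries and that anything outside the list is rejected. A hedged sketch of the two permitted calls (relative rpc.py path for brevity):

    # only these two methods should succeed under the allowlist above
    scripts/rpc.py spdk_get_version
    scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort   # expect exactly two names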
00:04:35.750 [2024-12-05 13:54:41.985181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2502249 ] 00:04:36.011 [2024-12-05 13:54:42.072607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.011 [2024-12-05 13:54:42.112093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.581 13:54:42 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.581 13:54:42 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:04:36.581 13:54:42 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:36.843 { 00:04:36.843 "version": "SPDK v25.01-pre git sha1 2bcaf03f7", 00:04:36.843 "fields": { 00:04:36.843 "major": 25, 00:04:36.843 "minor": 1, 00:04:36.843 "patch": 0, 00:04:36.843 "suffix": "-pre", 00:04:36.843 "commit": "2bcaf03f7" 00:04:36.843 } 00:04:36.843 } 00:04:36.843 13:54:42 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:36.843 13:54:42 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:36.843 13:54:42 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:04:36.843 13:54:42 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:36.843 13:54:42 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:36.843 13:54:42 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:36.843 13:54:42 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.843 13:54:42 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:36.843 13:54:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:36.843 13:54:42 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.843 13:54:42 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:36.843 13:54:42 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:36.843 13:54:42 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:36.843 13:54:42 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:04:36.843 13:54:42 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:36.843 13:54:42 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:36.843 13:54:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.843 13:54:42 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:36.843 13:54:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.843 13:54:42 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:36.843 13:54:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:36.843 13:54:42 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:36.843 13:54:42 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:36.843 13:54:42 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:37.104 request: 00:04:37.104 { 00:04:37.104 "method": "env_dpdk_get_mem_stats", 00:04:37.104 "req_id": 1 00:04:37.104 } 00:04:37.104 Got JSON-RPC error response 00:04:37.104 response: 00:04:37.104 { 00:04:37.104 "code": -32601, 00:04:37.104 "message": "Method not found" 00:04:37.104 } 00:04:37.104 13:54:43 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:04:37.104 13:54:43 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:37.104 13:54:43 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:37.104 13:54:43 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:37.104 13:54:43 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2502249 00:04:37.104 13:54:43 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2502249 ']' 00:04:37.104 13:54:43 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2502249 00:04:37.104 13:54:43 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:04:37.104 13:54:43 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.104 13:54:43 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2502249 00:04:37.104 13:54:43 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.104 13:54:43 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.104 13:54:43 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2502249' 00:04:37.104 killing process with pid 2502249 00:04:37.104 13:54:43 app_cmdline -- common/autotest_common.sh@973 -- # kill 2502249 00:04:37.104 13:54:43 app_cmdline -- common/autotest_common.sh@978 -- # wait 2502249 00:04:37.365 00:04:37.365 real 0m1.686s 00:04:37.365 user 0m2.003s 00:04:37.365 sys 0m0.464s 00:04:37.365 13:54:43 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.365 13:54:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:37.365 ************************************ 00:04:37.365 END TEST app_cmdline 00:04:37.365 ************************************ 00:04:37.365 13:54:43 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:37.365 13:54:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.365 13:54:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.365 13:54:43 -- common/autotest_common.sh@10 -- # set +x 00:04:37.365 ************************************ 00:04:37.365 START TEST version 00:04:37.365 ************************************ 00:04:37.365 13:54:43 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:37.365 * Looking for test storage... 
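The env_dpdk_get_mem_stats call in the cmdline test above came back with -32601, the JSON-RPC 2.0 "Method not found" code, confirming the allowlist blocks everything beyond spdk_get_version and rpc_get_methods; as with the core-claim case, the NOT wrapper converts the expected rejection into a pass. A sketch of the same expected-failure pattern:

    # the allowlist should reject this method; the failure is the pass condition
    if ! scripts/rpc.py env_dpdk_get_mem_stats 2>/dev/null; then
        echo 'allowlist enforced: env_dpdk_get_mem_stats rejected'
    fi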
00:04:37.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:37.365 13:54:43 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:37.365 13:54:43 version -- common/autotest_common.sh@1711 -- # lcov --version 00:04:37.365 13:54:43 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:37.625 13:54:43 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:37.625 13:54:43 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.625 13:54:43 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.625 13:54:43 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.625 13:54:43 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.625 13:54:43 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.625 13:54:43 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.625 13:54:43 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.625 13:54:43 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.625 13:54:43 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.625 13:54:43 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.625 13:54:43 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.625 13:54:43 version -- scripts/common.sh@344 -- # case "$op" in 00:04:37.625 13:54:43 version -- scripts/common.sh@345 -- # : 1 00:04:37.625 13:54:43 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.625 13:54:43 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.625 13:54:43 version -- scripts/common.sh@365 -- # decimal 1 00:04:37.625 13:54:43 version -- scripts/common.sh@353 -- # local d=1 00:04:37.625 13:54:43 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.625 13:54:43 version -- scripts/common.sh@355 -- # echo 1 00:04:37.625 13:54:43 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.625 13:54:43 version -- scripts/common.sh@366 -- # decimal 2 00:04:37.625 13:54:43 version -- scripts/common.sh@353 -- # local d=2 00:04:37.625 13:54:43 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.625 13:54:43 version -- scripts/common.sh@355 -- # echo 2 00:04:37.625 13:54:43 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.625 13:54:43 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.625 13:54:43 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.626 13:54:43 version -- scripts/common.sh@368 -- # return 0 00:04:37.626 13:54:43 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.626 13:54:43 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:37.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.626 --rc genhtml_branch_coverage=1 00:04:37.626 --rc genhtml_function_coverage=1 00:04:37.626 --rc genhtml_legend=1 00:04:37.626 --rc geninfo_all_blocks=1 00:04:37.626 --rc geninfo_unexecuted_blocks=1 00:04:37.626 00:04:37.626 ' 00:04:37.626 13:54:43 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:37.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.626 --rc genhtml_branch_coverage=1 00:04:37.626 --rc genhtml_function_coverage=1 00:04:37.626 --rc genhtml_legend=1 00:04:37.626 --rc geninfo_all_blocks=1 00:04:37.626 --rc geninfo_unexecuted_blocks=1 00:04:37.626 00:04:37.626 ' 00:04:37.626 13:54:43 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:37.626 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.626 --rc genhtml_branch_coverage=1 00:04:37.626 --rc genhtml_function_coverage=1 00:04:37.626 --rc genhtml_legend=1 00:04:37.626 --rc geninfo_all_blocks=1 00:04:37.626 --rc geninfo_unexecuted_blocks=1 00:04:37.626 00:04:37.626 ' 00:04:37.626 13:54:43 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:37.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.626 --rc genhtml_branch_coverage=1 00:04:37.626 --rc genhtml_function_coverage=1 00:04:37.626 --rc genhtml_legend=1 00:04:37.626 --rc geninfo_all_blocks=1 00:04:37.626 --rc geninfo_unexecuted_blocks=1 00:04:37.626 00:04:37.626 ' 00:04:37.626 13:54:43 version -- app/version.sh@17 -- # get_header_version major 00:04:37.626 13:54:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:37.626 13:54:43 version -- app/version.sh@14 -- # cut -f2 00:04:37.626 13:54:43 version -- app/version.sh@14 -- # tr -d '"' 00:04:37.626 13:54:43 version -- app/version.sh@17 -- # major=25 00:04:37.626 13:54:43 version -- app/version.sh@18 -- # get_header_version minor 00:04:37.626 13:54:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:37.626 13:54:43 version -- app/version.sh@14 -- # cut -f2 00:04:37.626 13:54:43 version -- app/version.sh@14 -- # tr -d '"' 00:04:37.626 13:54:43 version -- app/version.sh@18 -- # minor=1 00:04:37.626 13:54:43 version -- app/version.sh@19 -- # get_header_version patch 00:04:37.626 13:54:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:37.626 13:54:43 version -- app/version.sh@14 -- # cut -f2 00:04:37.626 13:54:43 version -- app/version.sh@14 -- # tr -d '"' 00:04:37.626 13:54:43 version -- app/version.sh@19 -- # patch=0 00:04:37.626 13:54:43 version -- app/version.sh@20 -- # get_header_version suffix 00:04:37.626 13:54:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:37.626 13:54:43 version -- app/version.sh@14 -- # cut -f2 00:04:37.626 13:54:43 version -- app/version.sh@14 -- # tr -d '"' 00:04:37.626 13:54:43 version -- app/version.sh@20 -- # suffix=-pre 00:04:37.626 13:54:43 version -- app/version.sh@22 -- # version=25.1 00:04:37.626 13:54:43 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:37.626 13:54:43 version -- app/version.sh@28 -- # version=25.1rc0 00:04:37.626 13:54:43 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:37.626 13:54:43 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:37.626 13:54:43 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:37.626 13:54:43 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:37.626 00:04:37.626 real 0m0.276s 00:04:37.626 user 0m0.166s 00:04:37.626 sys 0m0.157s 00:04:37.626 13:54:43 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.626 
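version.sh derives the version string purely from include/spdk/version.h: each get_header_version call greps the matching #define, takes field 2 with cut, and strips quotes with tr. Here major=25, minor=1, patch=0 and suffix=-pre; since patch is 0 the base is 25.1, and the -pre suffix is rendered as rc0, giving 25.1rc0, which must then equal python's spdk.__version__. The extraction pipeline, as traced above:

    # field extraction for one header constant (tab-delimited #define line)
    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'
    # -> 25; MINOR gives 1, PATCH gives 0, SUFFIX gives -pre, hence 25.1rc0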
13:54:43 version -- common/autotest_common.sh@10 -- # set +x 00:04:37.626 ************************************ 00:04:37.626 END TEST version 00:04:37.626 ************************************ 00:04:37.626 13:54:43 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:37.626 13:54:43 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:37.626 13:54:43 -- spdk/autotest.sh@194 -- # uname -s 00:04:37.626 13:54:43 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:04:37.626 13:54:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:37.626 13:54:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:37.626 13:54:43 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:37.626 13:54:43 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:04:37.626 13:54:43 -- spdk/autotest.sh@260 -- # timing_exit lib 00:04:37.626 13:54:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:37.626 13:54:43 -- common/autotest_common.sh@10 -- # set +x 00:04:37.626 13:54:43 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:04:37.626 13:54:43 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:04:37.626 13:54:43 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:04:37.626 13:54:43 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:04:37.626 13:54:43 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:04:37.626 13:54:43 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:04:37.626 13:54:43 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:37.626 13:54:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:37.626 13:54:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.626 13:54:43 -- common/autotest_common.sh@10 -- # set +x 00:04:37.626 ************************************ 00:04:37.626 START TEST nvmf_tcp 00:04:37.626 ************************************ 00:04:37.626 13:54:43 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:37.887 * Looking for test storage... 
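autotest.sh then dispatches the transport-specific suite: the traced tests compare the configured transport against rdma (false) and tcp (true), so nvmf.sh is invoked with --transport=tcp, which in turn runs nvmf_target_core.sh below. A rough sketch of that branch (variable name illustrative; the script tests the literal transport value as shown in the trace at autotest.sh lines 280-284):

    # transport dispatch as in the trace above
    if [ "$transport" = rdma ]; then
        : # rdma suite would run here
    elif [ "$transport" = tcp ]; then
        test/nvmf/nvmf.sh --transport=tcp
    fi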
00:04:37.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:37.887 13:54:43 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:37.887 13:54:43 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:37.887 13:54:43 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:37.887 13:54:44 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:37.887 13:54:44 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.887 13:54:44 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.887 13:54:44 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.887 13:54:44 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.887 13:54:44 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.887 13:54:44 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.887 13:54:44 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.887 13:54:44 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.887 13:54:44 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.887 13:54:44 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.887 13:54:44 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.887 13:54:44 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:37.887 13:54:44 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:37.887 13:54:44 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.887 13:54:44 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.887 13:54:44 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:37.887 13:54:44 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:37.887 13:54:44 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.888 13:54:44 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:37.888 13:54:44 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.888 13:54:44 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:37.888 13:54:44 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:37.888 13:54:44 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.888 13:54:44 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:37.888 13:54:44 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.888 13:54:44 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.888 13:54:44 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.888 13:54:44 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:37.888 13:54:44 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.888 13:54:44 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:37.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.888 --rc genhtml_branch_coverage=1 00:04:37.888 --rc genhtml_function_coverage=1 00:04:37.888 --rc genhtml_legend=1 00:04:37.888 --rc geninfo_all_blocks=1 00:04:37.888 --rc geninfo_unexecuted_blocks=1 00:04:37.888 00:04:37.888 ' 00:04:37.888 13:54:44 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:37.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.888 --rc genhtml_branch_coverage=1 00:04:37.888 --rc genhtml_function_coverage=1 00:04:37.888 --rc genhtml_legend=1 00:04:37.888 --rc geninfo_all_blocks=1 00:04:37.888 --rc geninfo_unexecuted_blocks=1 00:04:37.888 00:04:37.888 ' 00:04:37.888 13:54:44 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:37.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.888 --rc genhtml_branch_coverage=1 00:04:37.888 --rc genhtml_function_coverage=1 00:04:37.888 --rc genhtml_legend=1 00:04:37.888 --rc geninfo_all_blocks=1 00:04:37.888 --rc geninfo_unexecuted_blocks=1 00:04:37.888 00:04:37.888 ' 00:04:37.888 13:54:44 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:37.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.888 --rc genhtml_branch_coverage=1 00:04:37.888 --rc genhtml_function_coverage=1 00:04:37.888 --rc genhtml_legend=1 00:04:37.888 --rc geninfo_all_blocks=1 00:04:37.888 --rc geninfo_unexecuted_blocks=1 00:04:37.888 00:04:37.888 ' 00:04:37.888 13:54:44 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:37.888 13:54:44 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:37.888 13:54:44 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:37.888 13:54:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:37.888 13:54:44 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.888 13:54:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.888 ************************************ 00:04:37.888 START TEST nvmf_target_core 00:04:37.888 ************************************ 00:04:37.888 13:54:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:38.149 * Looking for test storage... 00:04:38.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.149 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:38.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.150 --rc genhtml_branch_coverage=1 00:04:38.150 --rc genhtml_function_coverage=1 00:04:38.150 --rc genhtml_legend=1 00:04:38.150 --rc geninfo_all_blocks=1 00:04:38.150 --rc geninfo_unexecuted_blocks=1 00:04:38.150 00:04:38.150 ' 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:38.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.150 --rc genhtml_branch_coverage=1 00:04:38.150 --rc genhtml_function_coverage=1 00:04:38.150 --rc genhtml_legend=1 00:04:38.150 --rc geninfo_all_blocks=1 00:04:38.150 --rc geninfo_unexecuted_blocks=1 00:04:38.150 00:04:38.150 ' 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:38.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.150 --rc genhtml_branch_coverage=1 00:04:38.150 --rc genhtml_function_coverage=1 00:04:38.150 --rc genhtml_legend=1 00:04:38.150 --rc geninfo_all_blocks=1 00:04:38.150 --rc geninfo_unexecuted_blocks=1 00:04:38.150 00:04:38.150 ' 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:38.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.150 --rc genhtml_branch_coverage=1 00:04:38.150 --rc genhtml_function_coverage=1 00:04:38.150 --rc genhtml_legend=1 00:04:38.150 --rc geninfo_all_blocks=1 00:04:38.150 --rc geninfo_unexecuted_blocks=1 00:04:38.150 00:04:38.150 ' 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:38.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:38.150 
************************************ 00:04:38.150 START TEST nvmf_abort 00:04:38.150 ************************************ 00:04:38.150 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:38.412 * Looking for test storage... 00:04:38.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:38.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.412 --rc genhtml_branch_coverage=1 00:04:38.412 --rc genhtml_function_coverage=1 00:04:38.412 --rc genhtml_legend=1 00:04:38.412 --rc geninfo_all_blocks=1 00:04:38.412 --rc geninfo_unexecuted_blocks=1 00:04:38.412 00:04:38.412 ' 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:38.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.412 --rc genhtml_branch_coverage=1 00:04:38.412 --rc genhtml_function_coverage=1 00:04:38.412 --rc genhtml_legend=1 00:04:38.412 --rc geninfo_all_blocks=1 00:04:38.412 --rc geninfo_unexecuted_blocks=1 00:04:38.412 00:04:38.412 ' 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:38.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.412 --rc genhtml_branch_coverage=1 00:04:38.412 --rc genhtml_function_coverage=1 00:04:38.412 --rc genhtml_legend=1 00:04:38.412 --rc geninfo_all_blocks=1 00:04:38.412 --rc geninfo_unexecuted_blocks=1 00:04:38.412 00:04:38.412 ' 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:38.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.412 --rc genhtml_branch_coverage=1 00:04:38.412 --rc genhtml_function_coverage=1 00:04:38.412 --rc genhtml_legend=1 00:04:38.412 --rc geninfo_all_blocks=1 00:04:38.412 --rc geninfo_unexecuted_blocks=1 00:04:38.412 00:04:38.412 ' 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:38.412 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:38.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
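Two pieces of per-script boilerplate repeat in the trace above and are worth unpacking once. First, the lcov probe: `lt 1.15 2` via `cmp_versions` splits both version strings on ".", "-" and ":" and compares the fields numerically, left to right, so a 1.x lcov selects the old `--rc lcov_branch_coverage=1` option spelling. A minimal sketch of that comparison, assuming purely numeric fields (illustrative, not the verbatim scripts/common.sh source):

    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"    # same split the trace shows
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing fields act as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                                              # equal is not less-than
    }
    lt 1.15 2 && echo "pre-2.x lcov"   # decided at the first field, 1 < 2, as traced above

Second, the `[: : integer expression expected` message printed each time nvmf/common.sh is sourced is noise, not a failure: line 33 runs a numeric test against a variable that is empty in this configuration, `[` returns non-zero, and the run continues, as the subsequent trace shows. A two-line reproduction, with a defensive spelling the script itself does not use:

    $ v=""; [ "$v" -eq 1 ] && echo yes
    bash: [: : integer expression expected
    $ [ "${v:-0}" -eq 1 ] || echo "empty counts as 0"   # default it before the numeric test
    empty counts as 0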
00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:38.413 13:54:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:46.566 13:54:51 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:04:46.566 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:04:46.566 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:46.566 13:54:51 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:04:46.566 Found net devices under 0000:4b:00.0: cvl_0_0 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:04:46.566 Found net devices under 0000:4b:00.1: cvl_0_1 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:46.566 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:46.567 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:04:46.567 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:46.567 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:04:46.567 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:04:46.567 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:46.567 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:46.567 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:46.567 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:46.567 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:46.567 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:46.567 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:46.567 13:54:51 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:46.567 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:46.567 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:46.567 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:04:46.567 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:46.567 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:46.567 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:46.567 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:46.567 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:46.567 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:46.567 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:46.567 13:54:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:46.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:04:46.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:04:46.567 00:04:46.567 --- 10.0.0.2 ping statistics --- 00:04:46.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:46.567 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:46.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:04:46.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:04:46.567 00:04:46.567 --- 10.0.0.1 ping statistics --- 00:04:46.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:46.567 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2506736 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2506736 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2506736 ']' 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.567 13:54:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.567 [2024-12-05 13:54:52.227437] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
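nvmftestinit above found both ports of an Intel E810 NIC (vendor 0x8086, device 0x159b, driver ice, interfaces cvl_0_0 and cvl_0_1) and split them into a target side and an initiator side: port 0 moves into a private network namespace for the SPDK target, port 1 stays in the host namespace for the initiator, and the two one-packet pings prove the path in each direction. Condensed, with every command taken verbatim from the trace (stale addresses are flushed first with `ip -4 addr flush`):

    ip netns add cvl_0_0_ns_spdk                       # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move port 0 into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the comment tag lets teardown strip exactly this rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # initiator -> target (0.639 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator (0.309 ms above)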
00:04:46.567 [2024-12-05 13:54:52.227509] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:46.567 [2024-12-05 13:54:52.325724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:46.567 [2024-12-05 13:54:52.379419] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:46.567 [2024-12-05 13:54:52.379479] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:46.567 [2024-12-05 13:54:52.379489] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:46.567 [2024-12-05 13:54:52.379496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:46.567 [2024-12-05 13:54:52.379502] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:04:46.567 [2024-12-05 13:54:52.381308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:46.567 [2024-12-05 13:54:52.381492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:46.567 [2024-12-05 13:54:52.381556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.829 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.829 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:04:46.829 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:04:46.829 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:46.829 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.829 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:46.829 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:04:46.829 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.829 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.829 [2024-12-05 13:54:53.106263] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:46.829 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.829 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:04:46.829 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.829 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:47.091 Malloc0 00:04:47.091 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.091 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:47.091 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.091 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:47.091 Delay0 
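The target itself is launched inside that namespace, and its flags decode directly against the startup notices just traced. `-m 0xE` is the reactor core mask: 0xE is binary 1110, i.e. cores 1, 2 and 3, matching "Total cores available: 3" and the three "Reactor started on core N" lines. `-e 0xFFFF` is the tracepoint group mask echoed by app_setup_trace, and `-i 0` is the shared-memory instance id (hence `--file-prefix=spdk0` in the DPDK EAL parameters). From the trace:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    waitforlisten 2506736   # test helper: returns once the app accepts RPCs on /var/tmp/spdk.sock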
00:04:47.091 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.091 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:04:47.091 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.091 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:47.091 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.091 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:04:47.091 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.091 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:47.091 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.091 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:04:47.091 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.091 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:47.091 [2024-12-05 13:54:53.193788] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:47.091 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.091 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:47.091 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.091 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:47.091 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.091 13:54:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:04:47.091 [2024-12-05 13:54:53.386649] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:04:49.638 Initializing NVMe Controllers 00:04:49.638 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:04:49.638 controller IO queue size 128 less than required 00:04:49.638 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:04:49.638 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:04:49.638 Initialization complete. Launching workers. 
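Before that abort run was kicked off, the target was assembled over JSON-RPC. Each `rpc_cmd` in the trace is assumed here to forward to scripts/rpc.py against the /var/tmp/spdk.sock socket named above (the wrapper is an assumption; the arguments are verbatim from the trace):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0       # 64 MiB bdev, 4096-byte blocks
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000              # avg/p99 read+write latency
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The delay bdev is the point of the test: with 1000000 injected on every read and write (microseconds, so roughly a second, if the usual bdev_delay units apply), I/Os stay outstanding long enough for the abort example, run above at queue depth 128, to race ABORT commands against them.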
00:04:49.638 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28484 00:04:49.638 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28549, failed to submit 62 00:04:49.638 success 28488, unsuccessful 61, failed 0 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:04:49.638 rmmod nvme_tcp 00:04:49.638 rmmod nvme_fabrics 00:04:49.638 rmmod nvme_keyring 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2506736 ']' 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2506736 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2506736 ']' 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2506736 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2506736 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2506736' 00:04:49.638 killing process with pid 2506736 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2506736 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2506736 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:49.638 13:54:55 
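Those summary counters reconcile exactly:

    127 completed + 28484 failed       = 28611 I/Os issued
    28549 submitted + 62 not submitted = 28611 abort attempts, one per I/O
    28488 successful + 61 unsuccessful = 28549 aborts actually submitted

The "failed" I/Os are presumably the ones a successful abort cut short, which against a one-second delay bdev is the intended outcome; the trace then tears down: subsystem deleted, the nvme-tcp module stack unloaded (the rmmod lines), and the target process killed by pid.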
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:49.638 13:54:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:52.181 13:54:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:04:52.181 00:04:52.181 real 0m13.440s 00:04:52.181 user 0m14.151s 00:04:52.181 sys 0m6.668s 00:04:52.181 13:54:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.181 13:54:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:52.181 ************************************ 00:04:52.181 END TEST nvmf_abort 00:04:52.181 ************************************ 00:04:52.181 13:54:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:52.181 13:54:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:52.181 13:54:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.181 13:54:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:52.181 ************************************ 00:04:52.182 START TEST nvmf_ns_hotplug_stress 00:04:52.182 ************************************ 00:04:52.182 13:54:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:52.182 * Looking for test storage... 
00:04:52.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:52.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.182 --rc genhtml_branch_coverage=1 00:04:52.182 --rc genhtml_function_coverage=1 00:04:52.182 --rc genhtml_legend=1 00:04:52.182 --rc geninfo_all_blocks=1 00:04:52.182 --rc geninfo_unexecuted_blocks=1 00:04:52.182 00:04:52.182 ' 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:52.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.182 --rc genhtml_branch_coverage=1 00:04:52.182 --rc genhtml_function_coverage=1 00:04:52.182 --rc genhtml_legend=1 00:04:52.182 --rc geninfo_all_blocks=1 00:04:52.182 --rc geninfo_unexecuted_blocks=1 00:04:52.182 00:04:52.182 ' 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:52.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.182 --rc genhtml_branch_coverage=1 00:04:52.182 --rc genhtml_function_coverage=1 00:04:52.182 --rc genhtml_legend=1 00:04:52.182 --rc geninfo_all_blocks=1 00:04:52.182 --rc geninfo_unexecuted_blocks=1 00:04:52.182 00:04:52.182 ' 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:52.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.182 --rc genhtml_branch_coverage=1 00:04:52.182 --rc genhtml_function_coverage=1 00:04:52.182 --rc genhtml_legend=1 00:04:52.182 --rc geninfo_all_blocks=1 00:04:52.182 --rc geninfo_unexecuted_blocks=1 00:04:52.182 00:04:52.182 ' 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.182 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
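
The PATH exported above keeps growing because paths/export.sh prepends the go/protoc/golangci directories every time it is sourced, without checking whether they are already present. The duplication is harmless but noisy; a hypothetical guard (not part of export.sh) would keep the variable bounded:

  # Hypothetical helper, not in the SPDK tree: prepend a directory to PATH
  # only when it is not already present.
  path_prepend() {
      case ":$PATH:" in
          *":$1:"*) ;;              # already present, nothing to do
          *) PATH="$1:$PATH" ;;
      esac
  }
  path_prepend /opt/go/1.21.1/bin
  path_prepend /opt/go/1.21.1/bin   # no-op on the second call
  export PATH
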
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:52.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:04:52.183 13:54:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
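
The "[: : integer expression expected" message logged above comes from nvmf/common.sh line 33 evaluating [ '' -eq 1 ] when the variable behind it expands empty; the test still falls through to the intended branch, so the run is unaffected. A defensive sketch of the same check (illustrative, not the actual SPDK fix) defaults the expansion to a number so test's integer parser never sees an empty string:

  # "maybe_flag" is a stand-in name; ":-0" supplies the numeric fallback.
  maybe_flag=""
  if [ "${maybe_flag:-0}" -eq 1 ]; then
      echo "flag enabled"
  else
      echo "flag disabled"
  fi
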
local -ga e810 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:00.320 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:00.320 
13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:00.320 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:00.320 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
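
The gather_supported_nvmf_pci_devs trace above matches the host's two E810 ports (vendor 0x8086, device 0x159b, bound to the ice driver) against the supported-device tables, then resolves each PCI address to its kernel netdev through sysfs. The same lookup can be reproduced by hand; this is an illustrative sketch, not the SPDK function:

  # Find Intel E810 (8086:159b) ports and print their net interface names,
  # mirroring the "Found net devices under ..." lines above.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      for net in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e $net ]] || continue
          echo "Found net device under $pci: ${net##*/}"
      done
  done
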
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:00.320 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:00.320 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
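
nvmf_tcp_init, traced above and just below, splits the two cvl ports across network namespaces so target and initiator talk over a real E810 link: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Collected from these trace lines into a standalone sequence (run as root; interface names are the cvl_* ones on this host):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                   # initiator -> target
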
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:00.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:00.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.711 ms 00:05:00.321 00:05:00.321 --- 10.0.0.2 ping statistics --- 00:05:00.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:00.321 rtt min/avg/max/mdev = 0.711/0.711/0.711/0.000 ms 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:00.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:00.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:05:00.321 00:05:00.321 --- 10.0.0.1 ping statistics --- 00:05:00.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:00.321 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2511688 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2511688 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
2511688 ']' 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.321 13:55:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:00.321 [2024-12-05 13:55:05.749605] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:05:00.321 [2024-12-05 13:55:05.749668] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:00.321 [2024-12-05 13:55:05.848215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:00.321 [2024-12-05 13:55:05.901122] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:00.321 [2024-12-05 13:55:05.901172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:00.321 [2024-12-05 13:55:05.901181] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:00.321 [2024-12-05 13:55:05.901189] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:00.321 [2024-12-05 13:55:05.901195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
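
nvmfappstart above launches nvmf_tgt inside the target namespace and waitforlisten then polls the RPC socket until the app answers; the DPDK EAL and reactor notices are the target coming up on cores 1-3 (-m 0xE). A condensed sketch of that launch-and-wait pattern, with the polling loop simplified and paths relative to the spdk checkout:

  # Launch the target in the namespace, then wait for /var/tmp/spdk.sock
  # to answer RPCs before configuring anything.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.1
  done
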
00:05:00.321 [2024-12-05 13:55:05.903080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.321 [2024-12-05 13:55:05.903242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.321 [2024-12-05 13:55:05.903243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:00.321 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.321 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:00.321 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:00.321 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:00.321 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:00.580 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:00.580 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:00.580 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:00.580 [2024-12-05 13:55:06.792772] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:00.580 13:55:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:00.842 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:01.102 [2024-12-05 13:55:07.191793] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:01.102 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:01.363 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:01.363 Malloc0 00:05:01.623 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:01.623 Delay0 00:05:01.623 13:55:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:01.882 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:02.143 NULL1 00:05:02.143 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
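
The rpc.py calls traced above build the test fixture in order: a TCP transport, subsystem cnode1 capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, a Malloc0-backed Delay0 bdev that injects latency on every operation, Delay0 attached as a namespace, and a 1000 MB null bdev. The same sequence as a plain script, argument for argument from the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc bdev_null_create NULL1 1000 512
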
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:02.143 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:02.143 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2512153 00:05:02.143 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:02.143 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:02.403 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:02.663 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:02.663 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:02.663 true 00:05:02.663 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:02.663 13:55:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:02.923 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:03.184 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:03.184 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:03.184 true 00:05:03.444 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:03.444 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:03.444 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:03.704 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:03.704 13:55:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:03.965 true 00:05:03.965 13:55:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:03.965 13:55:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
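
spdk_nvme_perf, started above as PERF_PID 2512153, is the load for the stress phase: 30 seconds of 512-byte random reads at queue depth 128 against the TCP subsystem, with -Q 1000 raising the tolerated I/O error count so that namespaces vanishing mid-run do not abort the workload. Reproduced standalone with the arguments copied from the trace:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
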
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:03.965 13:55:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:04.226 13:55:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:04.226 13:55:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:04.487 true 00:05:04.487 13:55:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:04.487 13:55:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:04.487 13:55:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:04.748 13:55:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:04.748 13:55:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:05.010 true 00:05:05.010 13:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:05.010 13:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:05.271 13:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:05.271 13:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:05.271 13:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:05.532 true 00:05:05.532 13:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:05.532 13:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:05.792 13:55:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:05.792 13:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:05.792 13:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:06.052 true 00:05:06.052 13:55:12 
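
From here to the end of the section the trace repeats one loop body per iteration: confirm the perf process is still alive with kill -0, hot-remove namespace 1, re-attach Delay0, then grow NULL1 by one MB (null_size 1001, 1002, ... as logged). The driving loop, reconstructed from the trace; the authoritative version lives in test/nvmf/target/ns_hotplug_stress.sh:

  # Loop skeleton matching the repeated entries below; runs until the
  # 30 s perf workload exits.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      (( ++null_size ))
      $rpc bdev_null_resize NULL1 "$null_size"
  done
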
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:06.052 13:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:06.312 13:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:06.312 13:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:06.312 13:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:06.572 true 00:05:06.572 13:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:06.572 13:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:06.832 13:55:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:06.832 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:06.832 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:07.092 true 00:05:07.092 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:07.092 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:07.352 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:07.352 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:07.352 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:07.612 true 00:05:07.612 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:07.612 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:07.871 13:55:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:07.871 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:07.871 13:55:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:08.131 true 00:05:08.131 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:08.131 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:08.390 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:08.649 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:08.649 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:08.649 true 00:05:08.649 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:08.649 13:55:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:08.909 13:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:09.167 13:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:09.167 13:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:09.167 true 00:05:09.167 13:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:09.167 13:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:09.426 13:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:09.687 13:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:09.687 13:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:09.687 true 00:05:09.946 13:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:09.946 13:55:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:09.946 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:10.206 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:10.206 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:10.466 true 00:05:10.466 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:10.466 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:10.466 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:10.772 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:10.772 13:55:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:10.772 true 00:05:11.095 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:11.095 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:11.095 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:11.375 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:11.375 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:11.375 true 00:05:11.375 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:11.375 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:11.634 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:11.893 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:11.894 13:55:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:11.894 true 00:05:11.894 13:55:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:11.894 13:55:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:12.153 13:55:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:12.414 13:55:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:12.414 13:55:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:12.414 true 00:05:12.675 13:55:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:12.675 13:55:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:12.675 13:55:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:12.936 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:12.937 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:13.197 true 00:05:13.197 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:13.197 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:13.197 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:13.456 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:13.456 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:13.716 true 00:05:13.716 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:13.716 13:55:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:13.975 13:55:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:13.975 13:55:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:13.975 13:55:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:14.235 true 00:05:14.235 13:55:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:14.235 13:55:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.495 13:55:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.495 13:55:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:14.495 13:55:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:14.755 true 00:05:14.755 13:55:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:14.755 13:55:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.015 13:55:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:15.274 13:55:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:15.274 13:55:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:15.274 true 00:05:15.274 13:55:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:15.274 13:55:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.533 13:55:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:15.792 13:55:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:15.792 13:55:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:15.792 true 00:05:15.792 13:55:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:15.792 13:55:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:16.052 13:55:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:16.311 13:55:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:16.311 13:55:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:16.311 true 00:05:16.571 13:55:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:16.571 13:55:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:16.571 13:55:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:16.830 13:55:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:16.830 13:55:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:17.089 true 00:05:17.089 13:55:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:17.089 13:55:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.089 13:55:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.347 13:55:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:17.347 13:55:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:17.607 true 00:05:17.607 13:55:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:17.607 13:55:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.867 13:55:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.867 13:55:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:17.867 13:55:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:18.127 true 00:05:18.127 13:55:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:18.127 13:55:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.387 13:55:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.387 13:55:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:18.387 13:55:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:18.647 true 00:05:18.647 13:55:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:18.647 13:55:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.907 13:55:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.167 13:55:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:19.167 13:55:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:19.167 true 00:05:19.167 13:55:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:19.167 13:55:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.427 13:55:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.688 13:55:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:19.688 13:55:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:19.688 true 00:05:19.688 13:55:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:19.688 13:55:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.947 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.208 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:20.208 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:20.208 true 00:05:20.468 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:20.468 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.468 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.729 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:20.729 13:55:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:05:20.989 true 00:05:20.989 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:20.989 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.989 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.250 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:05:21.250 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:05:21.511 true 00:05:21.511 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:21.511 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.771 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.771 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:05:21.771 13:55:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:05:22.032 true 00:05:22.032 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:22.032 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.293 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.293 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:05:22.293 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:05:22.553 true 00:05:22.553 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:22.553 13:55:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.814 13:55:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.814 13:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:05:22.814 13:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:05:23.074 true 00:05:23.074 13:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:23.074 13:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.333 13:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.594 13:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:05:23.594 13:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:05:23.594 true 00:05:23.594 13:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:23.594 13:55:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.856 13:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.117 13:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:05:24.118 13:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:05:24.118 true 00:05:24.118 13:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:24.118 13:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.377 13:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.636 13:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:05:24.636 13:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:05:24.636 true 00:05:24.956 13:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:24.956 13:55:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.956 13:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.216 13:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:05:25.216 13:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:05:25.216 true 00:05:25.477 13:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:25.477 13:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.477 13:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.737 13:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:05:25.737 13:55:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:05:25.996 true 00:05:25.996 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:25.996 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.996 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.254 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:05:26.254 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:05:26.514 true 00:05:26.514 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:26.514 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.514 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.774 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:05:26.774 13:55:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:05:27.034 true 00:05:27.034 13:55:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:27.034 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.294 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.294 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:05:27.294 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:05:27.554 true 00:05:27.554 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:27.554 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.815 13:55:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.815 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:05:27.815 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:05:28.074 true 00:05:28.075 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:28.075 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.366 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.366 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:05:28.366 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:05:28.626 true 00:05:28.626 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:28.626 13:55:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.886 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.145 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:05:29.145 13:55:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:05:29.145 true 00:05:29.145 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:29.145 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.406 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.665 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:05:29.665 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:05:29.665 true 00:05:29.665 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:29.665 13:55:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.926 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.186 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:05:30.186 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:05:30.186 true 00:05:30.447 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:30.447 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.447 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.707 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:05:30.707 13:55:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:05:30.967 true 00:05:30.967 13:55:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:30.967 13:55:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.967 13:55:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.228 13:55:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:05:31.228 13:55:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:05:31.488 true 00:05:31.488 13:55:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:31.488 13:55:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.488 13:55:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.749 13:55:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:05:31.749 13:55:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:05:32.010 true 00:05:32.010 13:55:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:32.010 13:55:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.271 13:55:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.271 13:55:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:05:32.271 13:55:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:05:32.534 true 00:05:32.534 13:55:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153 00:05:32.534 13:55:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.534 Initializing NVMe Controllers 00:05:32.534 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:32.534 Controller IO queue size 128, less than required. 00:05:32.534 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:32.534 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:32.534 Initialization complete. Launching workers. 
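The bdevperf run that was generating I/O underneath the hot-plug loop finishes here, and its summary table follows. A quick unit check on that table: 15.17 MiB/s divided by 31067.36 IOPS works out to roughly 512 bytes per operation, so the workload was apparently issuing ~512-byte I/Os against NSID 2 while NSID 1 was the one being hot-removed and re-added (an inference from the two columns, not something the log states). The 4120 us average latency with a 1139-10331 us spread is consistent with the "IO queue size 128, less than required" warning above: requests beyond the controller queue depth were queueing at the initiator's NVMe driver.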
00:05:32.534 ========================================================
00:05:32.534                                                                               Latency(us)
00:05:32.534 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:05:32.534 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   31067.36      15.17    4120.15    1138.55   10331.22
00:05:32.534 ========================================================
00:05:32.534 Total                                                                  :   31067.36      15.17    4120.15    1138.55   10331.22
00:05:32.793 13:55:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:32.793 13:55:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:05:33.062 13:55:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:05:33.062 true
00:05:33.062 13:55:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512153
00:05:33.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2512153) - No such process
00:05:33.062 13:55:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2512153
00:05:33.323 13:55:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:33.323 13:55:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:33.323 13:55:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:33.323 13:55:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:05:33.323 13:55:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:05:33.323 13:55:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:33.323 13:55:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:05:33.585 null0
00:05:33.585 13:55:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:33.585 13:55:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:33.585 13:55:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:05:33.845 null1
00:05:33.845 13:55:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:33.845 13:55:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:33.845 13:55:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:05:33.845 null2
00:05:34.104 13:55:40
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:34.104 13:55:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:34.104 13:55:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:34.104 null3 00:05:34.104 13:55:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:34.104 13:55:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:34.104 13:55:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:34.363 null4 00:05:34.363 13:55:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:34.363 13:55:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:34.363 13:55:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:34.624 null5 00:05:34.624 13:55:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:34.624 13:55:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:34.624 13:55:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:34.624 null6 00:05:34.624 13:55:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:34.624 13:55:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:34.624 13:55:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:34.884 null7 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
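For orientation, the sh@44-sh@50 markers that repeat through the long section above are the hot-plug loop in test/nvmf/target/ns_hotplug_stress.sh: while the I/O generator (PID 2512153 in this run) stays alive, namespace 1 is removed and re-added and the NULL1 bdev is grown by one unit per pass. A minimal sketch reconstructed from the xtrace markers alone; the variable names and exact control flow are assumptions:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # path as seen in the log
    # $perf_pid is the backgrounded I/O generator (2512153 in this run)
    while kill -0 "$perf_pid"; do                                          # sh@44: generator still alive?
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45: hot-remove ns 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: re-attach Delay0
        null_size=$((null_size + 1))                                       # sh@49: 1023, 1024, ... per pass
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                      # sh@50: resize under load
    done
    wait "$perf_pid"                                                       # sh@53: reap the generator

The loop ends exactly as logged above: once bdevperf exits, kill -0 fails with "No such process", the script reaps the PID at sh@53, and sh@54/sh@55 drop namespaces 1 and 2 before the threaded phase begins.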
00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
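The sh@14-sh@18 markers that start appearing here belong to the add_remove helper each background worker runs: ten cycles of attaching one null bdev at a fixed namespace ID and detaching it again. A sketch reconstructed from the markers; the function header itself never appears in the log, so its exact shape is an assumption:

    add_remove() {
        local nsid=$1 bdev=$2                                                              # sh@14
        for ((i = 0; i < 10; i++)); do                                                     # sh@16: ten cycles
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }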
00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
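On the launcher side (sh@58-sh@64), worker i is paired with bdev null$i and namespace ID i+1, which matches the add_remove 1 null0 through add_remove 8 null7 traces interleaved here. Again a sketch inferred from the markers, with the loop syntax assumed:

    nthreads=8
    pids=()                                            # sh@58
    for ((i = 0; i < nthreads; i++)); do               # sh@59
        "$rpc_py" bdev_null_create "null$i" 100 4096   # sh@60: 100 MiB bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do               # sh@62
        add_remove $((i + 1)) "null$i" &               # sh@63: one worker per namespace ID
        pids+=($!)                                     # sh@64: remember the worker PID
    done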
00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
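Because all eight workers hammer the same subsystem concurrently, their xtrace lines interleave from here on and ordering is only meaningful within a single worker. The sh@66 wait a few lines below (wait 2518950 ... 2518969, the eight PIDs collected above) simply blocks until every worker has finished its ten cycles:

    wait "${pids[@]}"   # sh@66: collect all eight add_remove workers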
00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2518950 2518952 2518955 2518958 2518961 2518964 2518966 2518969 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.884 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:35.145 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:35.145 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.145 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:35.145 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:35.145 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:35.145 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:35.145 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:35.145 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:35.145 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.145 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.145 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:35.407 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.407 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.407 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:35.407 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.407 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.407 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:35.407 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.407 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.407 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:35.407 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.407 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.407 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:35.407 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.407 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.407 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:35.407 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.407 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.407 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:35.407 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.407 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.407 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:35.407 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.407 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:35.407 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.669 13:55:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:35.669 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.670 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.670 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:35.670 13:55:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.931 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:35.931 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:35.931 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:35.931 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:35.931 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:35.931 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:35.931 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:35.931 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.931 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.931 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:35.931 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.931 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.931 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:36.191 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.191 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.191 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:36.191 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.191 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.191 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.191 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:36.191 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.191 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:36.191 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.191 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.191 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:36.191 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:05:36.191 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.191 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:36.191 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.191 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.191 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:36.191 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.191 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:36.191 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:36.191 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:36.191 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:36.191 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:36.452 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.716 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:36.716 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:36.716 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:36.716 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:36.716 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.716 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.716 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:36.716 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:36.716 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.716 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.716 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:36.716 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:36.716 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.716 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.716 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:36.716 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.716 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.716 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:36.716 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.716 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.716 13:55:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:36.977 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:36.977 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.977 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.978 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:36.978 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.978 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.978 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:36.978 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.978 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.978 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.978 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:36.978 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:36.978 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:36.978 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.978 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.978 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:36.978 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:36.978 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:36.978 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:05:36.978 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.978 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:36.978 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:36.978 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:37.239 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.239 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.239 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:37.239 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.239 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.239 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:37.239 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.239 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.239 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:37.239 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.239 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.239 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:37.239 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.239 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.239 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:37.239 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:37.239 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:05:37.239 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.239 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:37.239 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.239 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.499 13:55:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.499 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:37.759 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.759 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.759 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:37.759 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:37.759 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.759 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:37.759 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:37.759 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:37.759 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:37.759 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.759 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.759 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:37.759 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.759 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.759 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:37.759 13:55:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:37.760 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.760 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.760 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:37.760 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:38.020 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.020 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.020 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:38.020 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.020 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.020 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:38.020 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.020 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.020 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:38.020 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:05:38.020 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.020 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:38.020 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:38.020 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.020 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.020 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:38.020 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:38.020 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.020 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:38.282 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:38.282 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:38.282 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.282 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:38.282 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.282 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:38.282 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.282 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.282 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:38.282 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:38.282 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.282 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.282 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:38.282 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.282 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.282 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:38.282 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.282 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.282 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:38.282 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:38.282 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.282 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.283 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:38.283 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.283 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.283 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:38.283 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.283 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.283 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.283 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:38.283 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:38.543 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:38.543 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:38.543 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:38.543 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.543 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.543 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:38.543 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.543 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.543 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.543 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.543 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:38.803 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.803 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.803 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.803 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.803 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.803 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.803 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.803 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.803 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.803 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.803 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:38.803 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:38.803 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:05:38.803 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:38.803 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:38.803 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:38.803 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:38.803 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:38.803 rmmod nvme_tcp 00:05:38.803 rmmod nvme_fabrics 00:05:38.803 rmmod nvme_keyring 00:05:38.803 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:38.803 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:38.803 13:55:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:38.803 13:55:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2511688 ']' 00:05:38.803 13:55:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2511688 00:05:38.803 13:55:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2511688 ']' 00:05:38.803 13:55:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2511688 00:05:38.803 13:55:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:05:38.803 13:55:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.803 13:55:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2511688 00:05:38.803 13:55:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:38.803 13:55:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:38.803 13:55:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2511688' 00:05:38.803 killing process with pid 2511688 00:05:38.803 13:55:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2511688 00:05:38.803 13:55:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2511688 00:05:39.065 13:55:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:39.065 13:55:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:39.065 13:55:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:39.065 13:55:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:05:39.065 13:55:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:39.065 13:55:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:05:39.065 13:55:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:05:39.065 13:55:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]]
00:05:39.065 13:55:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:05:39.065 13:55:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:05:39.065 13:55:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:05:39.065 13:55:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:05:40.979 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:05:40.979
00:05:40.979 real 0m49.314s
00:05:40.979 user 3m20.904s
00:05:40.979 sys 0m17.854s
00:05:40.979 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:40.979 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:40.979 ************************************
00:05:40.979 END TEST nvmf_ns_hotplug_stress
00:05:40.979 ************************************
00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:05:41.240 ************************************
00:05:41.240 START TEST nvmf_delete_subsystem
00:05:41.240 ************************************
00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:05:41.240 * Looking for test storage...
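Annotation: the interleaved @16/@17/@18 xtrace above is the whole of the hot-plug stress that just passed: eight parallel workers churn namespaces 1-8 of nqn.2016-06.io.spdk:cnode1, each adding and removing its own namespace ten times. A minimal sketch of that pattern, reconstructed from the traced line markers rather than quoted from ns_hotplug_stress.sh ($rpc abbreviates the full rpc.py path):

    nqn=nqn.2016-06.io.spdk:cnode1
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    add_remove() {                        # one worker per namespace
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; ++i)); do    # traced as @16
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
        done
    }

    for n in {1..8}; do
        add_remove "$n" "null$((n - 1))" &   # null0..null7 back nsids 1..8
    done
    wait

Because all eight workers trace through the same xtrace descriptor, their add/remove pairs interleave freely, which is why the namespace numbers appear shuffled in the log.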
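Annotation: just before the timing summary, nvmftestfini unwinds the target in a fixed order: flush, unload the kernel initiator modules, kill the target reactor by pid, then restore iptables and the test interface. A compressed paraphrase of the traced helpers (not the verbatim common.sh; the pid and interface name are the ones from this run, and the @125 retry loop around modprobe is elided):

    sync
    modprobe -v -r nvme-tcp          # the rmmod lines show nvme_tcp,
    modprobe -v -r nvme-fabrics      # nvme_fabrics, nvme_keyring unloading

    killprocess() {                  # traced at autotest_common.sh@954-@978
        local pid=$1
        kill -0 "$pid" || return 0                            # already gone
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                            # reap the child
    }
    killprocess 2511688              # the nvmf target pid in this run

    iptables-save | grep -v SPDK_NVMF | iptables-restore  # iptr: restore rules
                                                          # minus SPDK_NVMF ones
    ip -4 addr flush cvl_0_1                              # release the test IP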
00:05:41.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.240 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:41.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.503 --rc genhtml_branch_coverage=1 00:05:41.503 --rc genhtml_function_coverage=1 00:05:41.503 --rc genhtml_legend=1 00:05:41.503 --rc geninfo_all_blocks=1 00:05:41.503 --rc geninfo_unexecuted_blocks=1 00:05:41.503 00:05:41.503 ' 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:41.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.503 --rc genhtml_branch_coverage=1 00:05:41.503 --rc genhtml_function_coverage=1 00:05:41.503 --rc genhtml_legend=1 00:05:41.503 --rc geninfo_all_blocks=1 00:05:41.503 --rc geninfo_unexecuted_blocks=1 00:05:41.503 00:05:41.503 ' 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:41.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.503 --rc genhtml_branch_coverage=1 00:05:41.503 --rc genhtml_function_coverage=1 00:05:41.503 --rc genhtml_legend=1 00:05:41.503 --rc geninfo_all_blocks=1 00:05:41.503 --rc geninfo_unexecuted_blocks=1 00:05:41.503 00:05:41.503 ' 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:41.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.503 --rc genhtml_branch_coverage=1 00:05:41.503 --rc genhtml_function_coverage=1 00:05:41.503 --rc genhtml_legend=1 00:05:41.503 --rc geninfo_all_blocks=1 00:05:41.503 --rc geninfo_unexecuted_blocks=1 00:05:41.503 00:05:41.503 ' 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:41.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:41.503 13:55:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:49.641 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:49.642 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:49.642 
13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:49.642 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:49.642 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:49.642 Found net devices under 0000:4b:00.1: cvl_0_1 
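The device-discovery loop traced above reduces to: build a list of supported PCI functions by vendor:device ID, then glob sysfs to translate each PCI address into its kernel net interface names. A reduced sketch of that pattern, with the pci_bus_cache lookup from common.sh replaced by a hard-coded address list for illustration (the sysfs path is the standard kernel layout):

    # For each NIC PCI function, the kernel exposes its netdevs as
    # directory names under /sys/bus/pci/devices/<addr>/net/.
    pci_devs=("0000:4b:00.0" "0000:4b:00.1")
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep the iface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done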
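Earlier in this trace, common.sh also tripped over "line 33: [: : integer expression expected": the xtrace shows '[' '' -eq 1 ']', i.e. an unset variable reaching a numeric test. A minimal reproduction and the usual guard (the FLAG name is illustrative, not the actual variable in common.sh):

    # An empty value hitting a numeric comparison prints
    # "[: : integer expression expected" and the test evaluates false.
    FLAG=""
    [ "$FLAG" -eq 1 ] && echo "enabled"

    # Guarded form: default the expansion so test always sees an integer.
    [ "${FLAG:-0}" -eq 1 ] && echo "enabled"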
00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:49.642 13:55:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:49.642 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:49.642 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:49.642 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:49.642 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:49.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:49.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:05:49.642 00:05:49.642 --- 10.0.0.2 ping statistics --- 00:05:49.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:49.642 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:05:49.642 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:49.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:49.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:05:49.642 00:05:49.642 --- 10.0.0.1 ping statistics --- 00:05:49.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:49.642 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:05:49.642 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:49.642 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:05:49.642 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:49.642 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:49.642 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:49.642 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:49.642 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:49.642 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:49.642 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:49.642 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:05:49.642 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:49.642 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:49.642 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:49.642 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2524236 00:05:49.642 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2524236 00:05:49.642 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:05:49.642 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2524236 ']' 00:05:49.642 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.642 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.642 13:55:55 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:49.643 [2024-12-05 13:55:55.144883] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:05:49.643 [2024-12-05 13:55:55.144952] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:49.643 [2024-12-05 13:55:55.221582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.643 [2024-12-05 13:55:55.267421] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:49.643 [2024-12-05 13:55:55.267491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:49.643 [2024-12-05 13:55:55.267498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:49.643 [2024-12-05 13:55:55.267503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:49.643 [2024-12-05 13:55:55.267507] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:49.643 [2024-12-05 13:55:55.270482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.643 [2024-12-05 13:55:55.270506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:49.643 [2024-12-05 13:55:55.424216] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:49.643 13:55:55 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:49.643 [2024-12-05 13:55:55.448572] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:49.643 NULL1 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:49.643 Delay0 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2524257 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:05:49.643 13:55:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:49.643 [2024-12-05 13:55:55.575461] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
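The nvmf_tcp_init sequence above builds the whole two-node topology on a single host: the first E810 port (cvl_0_0, the target side) is moved into a private network namespace and addressed 10.0.0.2/24, while the second port (cvl_0_1, the initiator side) stays in the root namespace as 10.0.0.1/24; the two ports are presumably cabled back-to-back, so the ping pair validates the path. Condensed from the commands traced above (same names and addresses):

    ip -4 addr flush cvl_0_0                           # drop any stale addresses
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port -> namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns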
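With nvmf_tgt running inside the namespace, the test provisions everything over JSON-RPC: a TCP transport, subsystem cnode1, a listener on 10.0.0.2:4420, a 1000 MB null bdev, and a delay bdev wrapping it with 1,000,000 us (1 s) on all four latency knobs. A condensed replay of the calls traced above; rpc.py stands in for the rpc_cmd wrapper (both talk to /var/tmp/spdk.sock by default), and the nvmf_tgt path is shortened:

    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &   # target in the ns
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512             # 1000 MB, 512 B blocks
    rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # read/write avg+p99 latency, us
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0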
00:05:51.556 13:55:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
13:55:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
13:55:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:05:51.556 [... repeated 'Read/Write completed with error (sct=0, sc=8)' completions with interspersed 'starting I/O failed: -6' markers collapsed ...]
00:05:51.557 [2024-12-05 13:55:57.701132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd862c0 is same with the state(6) to be set
00:05:51.557 [... repeated 'Read/Write completed with error (sct=0, sc=8)' completions and further 'starting I/O failed: -6' markers collapsed ...]
00:05:51.557 [2024-12-05 13:55:57.705448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f58c8000c40 is same with the state(6) to be set
00:05:51.557 [... repeated 'Read/Write completed with error (sct=0, sc=8)' completions collapsed ...]
00:05:52.500 [2024-12-05 13:55:58.676040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd879b0 is same with the state(6) to be set
00:05:52.500 [... repeated 'Read/Write completed with error (sct=0, sc=8)' completions collapsed ...]
00:05:52.500 [2024-12-05 13:55:58.704290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd864a0 is same with the state(6) to be set
00:05:52.500 [... repeated 'Read/Write completed with error (sct=0, sc=8)' completions collapsed ...]
00:05:52.500 [2024-12-05 13:55:58.705013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd86860 is same with the state(6) to be set
00:05:52.500 [... repeated 'Read/Write completed with error (sct=0, sc=8)' completions collapsed ...]
00:05:52.500 [2024-12-05 13:55:58.707550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f58c800d7c0 is same with the state(6) to be set
00:05:52.500 [... further repeated completion-with-error lines collapsed; the remainder of the stream continues below ...] Read completed with
error (sct=0, sc=8) 00:05:52.500 Read completed with error (sct=0, sc=8) 00:05:52.500 Read completed with error (sct=0, sc=8) 00:05:52.500 Read completed with error (sct=0, sc=8) 00:05:52.500 Read completed with error (sct=0, sc=8) 00:05:52.500 Write completed with error (sct=0, sc=8) 00:05:52.500 Read completed with error (sct=0, sc=8) 00:05:52.500 Read completed with error (sct=0, sc=8) 00:05:52.500 Read completed with error (sct=0, sc=8) 00:05:52.500 Write completed with error (sct=0, sc=8) 00:05:52.500 Read completed with error (sct=0, sc=8) 00:05:52.500 Read completed with error (sct=0, sc=8) 00:05:52.500 Write completed with error (sct=0, sc=8) 00:05:52.500 Write completed with error (sct=0, sc=8) 00:05:52.500 Write completed with error (sct=0, sc=8) 00:05:52.500 Read completed with error (sct=0, sc=8) 00:05:52.500 Read completed with error (sct=0, sc=8) 00:05:52.500 [2024-12-05 13:55:58.707703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f58c800d020 is same with the state(6) to be set 00:05:52.500 Initializing NVMe Controllers 00:05:52.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:52.500 Controller IO queue size 128, less than required. 00:05:52.500 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:52.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:05:52.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:05:52.500 Initialization complete. Launching workers. 00:05:52.500 ======================================================== 00:05:52.500 Latency(us) 00:05:52.500 Device Information : IOPS MiB/s Average min max 00:05:52.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.76 0.08 890289.21 406.51 1006944.53 00:05:52.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 172.25 0.08 891659.14 377.99 1011666.61 00:05:52.501 ======================================================== 00:05:52.501 Total : 344.01 0.17 890975.17 377.99 1011666.61 00:05:52.501 00:05:52.501 [2024-12-05 13:55:58.708241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd879b0 (9): Bad file descriptor 00:05:52.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:05:52.501 13:55:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.501 13:55:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:05:52.501 13:55:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2524257 00:05:52.501 13:55:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:05:53.071 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:05:53.071 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2524257 00:05:53.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2524257) - No such process 00:05:53.071 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2524257 00:05:53.071 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # 
local es=0 00:05:53.071 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2524257 00:05:53.071 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:05:53.071 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.071 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:05:53.071 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.071 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2524257 00:05:53.071 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:05:53.071 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:53.071 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:53.071 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:53.071 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:53.071 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.072 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:53.072 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.072 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:53.072 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.072 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:53.072 [2024-12-05 13:55:59.241118] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:53.072 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.072 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.072 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.072 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:53.072 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.072 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2524943 00:05:53.072 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:05:53.072 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 
-o 512 -P 4 00:05:53.072 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2524943 00:05:53.072 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:53.072 [2024-12-05 13:55:59.346051] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:05:53.642 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:53.642 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2524943 00:05:53.642 13:55:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:54.277 13:56:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:54.277 13:56:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2524943 00:05:54.277 13:56:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:54.542 13:56:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:54.542 13:56:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2524943 00:05:54.542 13:56:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:55.110 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:55.110 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2524943 00:05:55.110 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:55.679 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:55.679 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2524943 00:05:55.679 13:56:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:56.250 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:56.250 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2524943 00:05:56.250 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:56.250 Initializing NVMe Controllers 00:05:56.250 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:56.250 Controller IO queue size 128, less than required. 00:05:56.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:56.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:05:56.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:05:56.250 Initialization complete. Launching workers. 
00:05:56.250 ======================================================== 00:05:56.250 Latency(us) 00:05:56.250 Device Information : IOPS MiB/s Average min max 00:05:56.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002295.19 1000119.04 1006429.59 00:05:56.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002874.35 1000305.65 1007995.52 00:05:56.250 ======================================================== 00:05:56.250 Total : 256.00 0.12 1002584.77 1000119.04 1007995.52 00:05:56.250 00:05:56.511 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:56.511 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2524943 00:05:56.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2524943) - No such process 00:05:56.511 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2524943 00:05:56.511 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:56.511 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:05:56.511 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:56.511 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:05:56.511 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:56.511 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:05:56.511 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:56.511 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:56.511 rmmod nvme_tcp 00:05:56.772 rmmod nvme_fabrics 00:05:56.772 rmmod nvme_keyring 00:05:56.772 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:56.772 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:05:56.772 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:05:56.772 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2524236 ']' 00:05:56.772 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2524236 00:05:56.772 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2524236 ']' 00:05:56.772 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2524236 00:05:56.772 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:05:56.772 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.772 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2524236 00:05:56.772 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.772 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:05:56.772 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2524236' 00:05:56.772 killing process with pid 2524236 00:05:56.772 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2524236 00:05:56.772 13:56:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2524236 00:05:56.772 13:56:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:56.772 13:56:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:56.772 13:56:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:56.772 13:56:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:05:56.772 13:56:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:05:56.772 13:56:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:56.773 13:56:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:05:56.773 13:56:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:56.773 13:56:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:56.773 13:56:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:56.773 13:56:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:56.773 13:56:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:59.316 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:59.316 00:05:59.316 real 0m17.765s 00:05:59.316 user 0m29.561s 00:05:59.316 sys 0m6.752s 00:05:59.316 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.316 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:59.316 ************************************ 00:05:59.316 END TEST nvmf_delete_subsystem 00:05:59.316 ************************************ 00:05:59.316 13:56:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:05:59.316 13:56:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:59.316 13:56:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.316 13:56:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:59.316 ************************************ 00:05:59.316 START TEST nvmf_host_management 00:05:59.316 ************************************ 00:05:59.316 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:05:59.316 * Looking for test storage... 
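Recapping the delete_subsystem run that closed above: nvmf_delete_subsystem was issued while spdk_nvme_perf still had queue depth 128 per core in flight, so the target completed everything queued with an error status (sct=0, sc=8, consistent with the generic 'aborted due to SQ deletion' code) and the initiator logged 'starting I/O failed: -6' (likely -ENXIO) for requests it could no longer submit; perf then exited nonzero, which the NOT wait wrapper asserts is the expected outcome. The subsequent wait is a bounded poll, paraphrased from the traced lines @34-@38 (a close paraphrase, not the verbatim script):

    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do   # perf still running?
        sleep 0.5
        (( delay++ > 30 )) && exit 1             # bail out after ~15 s
    done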
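One sanity check the second run's table permits: queue depth 128 per core against Delay0's configured 1,000,000 us latency pins throughput near queue_depth / latency, by Little's law:

    IOPS per core ≈ 128 / 1.002 s ≈ 127.7

which roughly matches the reported 128.00 IOPS per core, with averages of 1002295.19 us and 1002874.35 us and minima just over the injected 1 s. The first run's lower averages (~890 ms, minima under 1 ms) plausibly reflect I/O cut short by the subsystem deletion rather than a full delay period.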
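The teardown traced above is symmetric with the setup, and its firewall cleanup is worth calling out: because the setup inserted the ACCEPT rule tagged with an '-m comment --comment SPDK_NVMF:...' marker (nvmf/common.sh@790), nvmftestfini can strip exactly its own rules without touching the rest of the ruleset (nvmf/common.sh@791):

    # Setup: tag the rule so it can be found again later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Teardown: re-load the ruleset minus every SPDK_NVMF-tagged line.
    iptables-save | grep -v SPDK_NVMF | iptables-restore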
00:05:59.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:59.316 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:59.316 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:05:59.316 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:59.316 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:59.316 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.316 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.316 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:59.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.317 --rc genhtml_branch_coverage=1 00:05:59.317 --rc genhtml_function_coverage=1 00:05:59.317 --rc genhtml_legend=1 00:05:59.317 --rc geninfo_all_blocks=1 00:05:59.317 --rc geninfo_unexecuted_blocks=1 00:05:59.317 00:05:59.317 ' 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:59.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.317 --rc genhtml_branch_coverage=1 00:05:59.317 --rc genhtml_function_coverage=1 00:05:59.317 --rc genhtml_legend=1 00:05:59.317 --rc geninfo_all_blocks=1 00:05:59.317 --rc geninfo_unexecuted_blocks=1 00:05:59.317 00:05:59.317 ' 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:59.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.317 --rc genhtml_branch_coverage=1 00:05:59.317 --rc genhtml_function_coverage=1 00:05:59.317 --rc genhtml_legend=1 00:05:59.317 --rc geninfo_all_blocks=1 00:05:59.317 --rc geninfo_unexecuted_blocks=1 00:05:59.317 00:05:59.317 ' 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:59.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.317 --rc genhtml_branch_coverage=1 00:05:59.317 --rc genhtml_function_coverage=1 00:05:59.317 --rc genhtml_legend=1 00:05:59.317 --rc geninfo_all_blocks=1 00:05:59.317 --rc geninfo_unexecuted_blocks=1 00:05:59.317 00:05:59.317 ' 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:05:59.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:59.317 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:59.318 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:59.318 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:59.318 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:59.318 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:59.318 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:59.318 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:59.318 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:59.318 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:05:59.318 13:56:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:07.451 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:07.451 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:07.451 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:07.452 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:07.452 13:56:12 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:07.452 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:07.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:07.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:06:07.452 00:06:07.452 --- 10.0.0.2 ping statistics --- 00:06:07.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:07.452 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:07.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:07.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:06:07.452 00:06:07.452 --- 10.0.0.1 ping statistics --- 00:06:07.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:07.452 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2529965 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2529965 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:07.452 13:56:12 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2529965 ']' 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.452 13:56:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:07.452 [2024-12-05 13:56:12.949489] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:06:07.452 [2024-12-05 13:56:12.949577] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:07.452 [2024-12-05 13:56:13.048759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:07.452 [2024-12-05 13:56:13.101636] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:07.452 [2024-12-05 13:56:13.101685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:07.452 [2024-12-05 13:56:13.101695] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:07.452 [2024-12-05 13:56:13.101703] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:07.452 [2024-12-05 13:56:13.101711] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
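nvmfappstart -m 0x1E, traced above, boils down to launching the target inside the test namespace and blocking until its RPC socket answers. A minimal sketch, substituting the stock rpc.py client for the harness's waitforlisten polling loop:

# -m 0x1E = cores 1-4; the four 'Reactor started' notices that follow confirm the mask
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# returns only once initialization is complete and /var/tmp/spdk.sock is listening
./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init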
00:06:07.452 [2024-12-05 13:56:13.103768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.452 [2024-12-05 13:56:13.103929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.452 [2024-12-05 13:56:13.104093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.452 [2024-12-05 13:56:13.104094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:07.713 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.713 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:07.713 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:07.714 [2024-12-05 13:56:13.821521] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:07.714 Malloc0 00:06:07.714 [2024-12-05 13:56:13.901888] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2530300 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2530300 /var/tmp/bdevperf.sock 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2530300 ']' 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:07.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:07.714 { 00:06:07.714 "params": { 00:06:07.714 "name": "Nvme$subsystem", 00:06:07.714 "trtype": "$TEST_TRANSPORT", 00:06:07.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:07.714 "adrfam": "ipv4", 00:06:07.714 "trsvcid": "$NVMF_PORT", 00:06:07.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:07.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:07.714 "hdgst": ${hdgst:-false}, 00:06:07.714 "ddgst": ${ddgst:-false} 00:06:07.714 }, 00:06:07.714 "method": "bdev_nvme_attach_controller" 00:06:07.714 } 00:06:07.714 EOF 00:06:07.714 )") 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:07.714 13:56:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:07.714 "params": { 00:06:07.714 "name": "Nvme0", 00:06:07.714 "trtype": "tcp", 00:06:07.714 "traddr": "10.0.0.2", 00:06:07.714 "adrfam": "ipv4", 00:06:07.714 "trsvcid": "4420", 00:06:07.714 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:07.714 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:07.714 "hdgst": false, 00:06:07.714 "ddgst": false 00:06:07.714 }, 00:06:07.714 "method": "bdev_nvme_attach_controller" 00:06:07.714 }' 00:06:07.974 [2024-12-05 13:56:14.013471] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
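The bdevperf launch traced above passes --json /dev/fd/63, which is simply bash process substitution around the gen_nvmf_target_json helper whose heredoc output is printed in the trace. Spelled out, the equivalent invocation is:

# attach Nvme0 over TCP to 10.0.0.2:4420, then run a 10 s verify workload
# at queue depth 64 with 64 KiB I/Os
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10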
00:06:07.974 [2024-12-05 13:56:14.013544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2530300 ] 00:06:07.974 [2024-12-05 13:56:14.106316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.974 [2024-12-05 13:56:14.159938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.235 Running I/O for 10 seconds... 00:06:08.808 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.808 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:08.808 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:08.808 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.808 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:08.808 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.808 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:08.808 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:08.808 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:08.808 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:08.808 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:08.808 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:08.808 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:08.808 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:08.808 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:08.808 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:08.808 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.808 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:08.808 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.809 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=759 00:06:08.809 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 759 -ge 100 ']' 00:06:08.809 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:08.809 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:08.809 13:56:14 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:08.809 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:08.809 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.809 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:08.809 [2024-12-05 13:56:14.905466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93dab0 is same with the state(6) to be set 00:06:08.809 [2024-12-05 13:56:14.905515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93dab0 is same with the state(6) to be set 00:06:08.809 [2024-12-05 13:56:14.905524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93dab0 is same with the state(6) to be set 00:06:08.809 [2024-12-05 13:56:14.905532] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93dab0 is same with the state(6) to be set 00:06:08.809 [2024-12-05 13:56:14.905538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93dab0 is same with the state(6) to be set 00:06:08.809 [2024-12-05 13:56:14.905545] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93dab0 is same with the state(6) to be set 00:06:08.809 [2024-12-05 13:56:14.905551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93dab0 is same with the state(6) to be set 00:06:08.809 [2024-12-05 13:56:14.905558] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93dab0 is same with the state(6) to be set 00:06:08.809 [2024-12-05 13:56:14.905564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93dab0 is same with the state(6) to be set 00:06:08.809 [2024-12-05 13:56:14.905571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93dab0 is same with the state(6) to be set 00:06:08.809 [2024-12-05 13:56:14.905578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93dab0 is same with the state(6) to be set 00:06:08.809 [2024-12-05 13:56:14.905584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93dab0 is same with the state(6) to be set 00:06:08.809 [2024-12-05 13:56:14.908388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:08.809 [2024-12-05 13:56:14.908427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:08.809 [2024-12-05 13:56:14.908438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:08.809 [2024-12-05 13:56:14.908445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:08.809 [2024-12-05 13:56:14.908461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:08.809 [2024-12-05 13:56:14.908469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:08.809 [2024-12-05 
13:56:14.908478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:08.809 [2024-12-05 13:56:14.908485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:08.809 [2024-12-05 13:56:14.908493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453010 is same with the state(6) to be set 00:06:08.809 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.809 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:08.809 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.809 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:08.809 [2024-12-05 13:56:14.916222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.809 [2024-12-05 13:56:14.916246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:08.809 [2024-12-05 13:56:14.916267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.809 [2024-12-05 13:56:14.916275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:08.809 [2024-12-05 13:56:14.916285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.809 [2024-12-05 13:56:14.916292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:08.809 [2024-12-05 13:56:14.916302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.809 [2024-12-05 13:56:14.916309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:08.809 [2024-12-05 13:56:14.916319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.809 [2024-12-05 13:56:14.916326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:08.809 [2024-12-05 13:56:14.916335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.809 [2024-12-05 13:56:14.916343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:08.809 [2024-12-05 13:56:14.916353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.809 [2024-12-05 13:56:14.916360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:08.809 [2024-12-05 
13:56:14.916370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.809 [2024-12-05 13:56:14.916377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:08.809 [2024-12-05 13:56:14.916386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.809 [2024-12-05 13:56:14.916394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:08.809 [2024-12-05 13:56:14.916403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.809 [2024-12-05 13:56:14.916410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:08.809 [2024-12-05 13:56:14.916420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.809 [2024-12-05 13:56:14.916427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:08.809 [2024-12-05 13:56:14.916437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.809 [2024-12-05 13:56:14.916444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:08.809 [2024-12-05 13:56:14.916459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.809 [2024-12-05 13:56:14.916467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:08.809 [2024-12-05 13:56:14.916476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.809 [2024-12-05 13:56:14.916487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:08.809 [2024-12-05 13:56:14.916496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.809 [2024-12-05 13:56:14.916504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:08.809 [2024-12-05 13:56:14.916513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.809 [2024-12-05 13:56:14.916521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:08.809 [2024-12-05 13:56:14.916530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:08.809 [2024-12-05 13:56:14.916538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:08.809 [2024-12-05 
13:56:14.916547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:08.809 [2024-12-05 13:56:14.916554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion NOTICE pair repeats for every I/O still outstanding at reset: WRITE cid:13-53 (lba 108160-113280, len:128) and READ cid:0-4 (lba 106496-107008, len:128), each completed as ABORTED - SQ DELETION (00/08) ...]
00:06:08.810 [2024-12-05 13:56:14.918591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:06:08.811 task offset: 105216 on job bdev=Nvme0n1 fails
00:06:08.811
00:06:08.811 Latency(us)
00:06:08.811 [2024-12-05T12:56:15.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:08.811 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:08.811 Job: Nvme0n1 ended in about 0.55 seconds with error
00:06:08.811 Verification LBA range: start 0x0 length 0x400
00:06:08.811 Nvme0n1 : 0.55 1506.06 94.13 117.26 0.00 38440.10 1645.23 35607.89
00:06:08.811 [2024-12-05T12:56:15.111Z] ===================================================================================================================
00:06:08.811 [2024-12-05T12:56:15.111Z] Total : 1506.06 94.13 117.26 0.00 38440.10 1645.23 35607.89
00:06:08.811 [2024-12-05 13:56:14.920603] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:08.811 [2024-12-05 13:56:14.920626]
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2453010 (9): Bad file descriptor 00:06:08.811 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.811 13:56:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:08.811 [2024-12-05 13:56:14.970861] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:06:09.747 13:56:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2530300 00:06:09.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2530300) - No such process 00:06:09.747 13:56:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:09.747 13:56:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:09.747 13:56:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:09.747 13:56:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:09.747 13:56:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:09.747 13:56:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:09.747 13:56:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:09.747 13:56:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:09.747 { 00:06:09.747 "params": { 00:06:09.747 "name": "Nvme$subsystem", 00:06:09.747 "trtype": "$TEST_TRANSPORT", 00:06:09.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:09.747 "adrfam": "ipv4", 00:06:09.747 "trsvcid": "$NVMF_PORT", 00:06:09.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:09.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:09.747 "hdgst": ${hdgst:-false}, 00:06:09.747 "ddgst": ${ddgst:-false} 00:06:09.747 }, 00:06:09.747 "method": "bdev_nvme_attach_controller" 00:06:09.747 } 00:06:09.747 EOF 00:06:09.747 )") 00:06:09.747 13:56:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:09.747 13:56:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:09.747 13:56:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:09.748 13:56:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:09.748 "params": { 00:06:09.748 "name": "Nvme0", 00:06:09.748 "trtype": "tcp", 00:06:09.748 "traddr": "10.0.0.2", 00:06:09.748 "adrfam": "ipv4", 00:06:09.748 "trsvcid": "4420", 00:06:09.748 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:09.748 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:09.748 "hdgst": false, 00:06:09.748 "ddgst": false 00:06:09.748 }, 00:06:09.748 "method": "bdev_nvme_attach_controller" 00:06:09.748 }' 00:06:09.748 [2024-12-05 13:56:15.984359] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
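For anyone replaying this step by hand: the gen_nvmf_target_json fragment printed above is what bdevperf consumes via --json /dev/fd/62. A minimal standalone sketch, assuming the standard SPDK "subsystems" wrapper that the helper is understood to add around that fragment; target address, NQNs, and bdevperf flags are taken verbatim from this run:

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same queue depth / IO size / workload as the harness invocation above
./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1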
00:06:09.748 [2024-12-05 13:56:15.984415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2530690 ] 00:06:10.007 [2024-12-05 13:56:16.072813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.007 [2024-12-05 13:56:16.107650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.007 Running I/O for 1 seconds... 00:06:11.391 1543.00 IOPS, 96.44 MiB/s 00:06:11.391 Latency(us) 00:06:11.391 [2024-12-05T12:56:17.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:11.391 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:11.391 Verification LBA range: start 0x0 length 0x400 00:06:11.391 Nvme0n1 : 1.01 1597.06 99.82 0.00 0.00 39293.58 1256.11 31675.73 00:06:11.391 [2024-12-05T12:56:17.691Z] =================================================================================================================== 00:06:11.391 [2024-12-05T12:56:17.691Z] Total : 1597.06 99.82 0.00 0.00 39293.58 1256.11 31675.73 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:11.391 rmmod nvme_tcp 00:06:11.391 rmmod nvme_fabrics 00:06:11.391 rmmod nvme_keyring 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2529965 ']' 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2529965 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2529965 ']' 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2529965 00:06:11.391 13:56:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2529965 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2529965' 00:06:11.391 killing process with pid 2529965 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2529965 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2529965 00:06:11.391 [2024-12-05 13:56:17.664921] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:11.391 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:11.652 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:11.652 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:11.652 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:11.652 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:11.652 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:11.652 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:11.652 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:11.652 13:56:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:13.566 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:13.566 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:13.566 00:06:13.566 real 0m14.577s 00:06:13.566 user 0m22.989s 00:06:13.566 sys 0m6.771s 00:06:13.566 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.567 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.567 ************************************ 00:06:13.567 END TEST nvmf_host_management 00:06:13.567 ************************************ 00:06:13.567 13:56:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 
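Before the lvol test output begins, it is worth noting what the nvmftestfini teardown traced above amounts to. An approximate manual equivalent (a sketch: the harness wraps these in retries and xtrace plumbing, and _remove_spdk_ns is assumed here to reduce to deleting the test namespace):

sync
modprobe -v -r nvme-tcp          # also pulls out nvme_fabrics / nvme_keyring, per the rmmod lines above
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                  # nvmf_tgt pid recorded at startup (2529965 in this run)
iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only SPDK's ACCEPT rules
ip netns delete cvl_0_0_ns_spdk  # assumption: what _remove_spdk_ns boils down to
ip -4 addr flush cvl_0_1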
00:06:13.567 13:56:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:13.567 13:56:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.567 13:56:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:13.567 ************************************ 00:06:13.567 START TEST nvmf_lvol 00:06:13.567 ************************************ 00:06:13.567 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:13.829 * Looking for test storage... 00:06:13.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:13.829 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:13.829 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:06:13.829 13:56:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:13.829 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:13.829 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.829 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.829 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.829 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.829 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.829 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.829 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.829 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.829 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.829 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.829 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.829 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:13.829 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:13.829 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.829 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:13.829 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:13.829 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:13.829 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.829 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:13.829 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.829 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:13.829 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:13.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.830 --rc genhtml_branch_coverage=1 00:06:13.830 --rc genhtml_function_coverage=1 00:06:13.830 --rc genhtml_legend=1 00:06:13.830 --rc geninfo_all_blocks=1 00:06:13.830 --rc geninfo_unexecuted_blocks=1 00:06:13.830 00:06:13.830 ' 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:13.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.830 --rc genhtml_branch_coverage=1 00:06:13.830 --rc genhtml_function_coverage=1 00:06:13.830 --rc genhtml_legend=1 00:06:13.830 --rc geninfo_all_blocks=1 00:06:13.830 --rc geninfo_unexecuted_blocks=1 00:06:13.830 00:06:13.830 ' 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:13.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.830 --rc genhtml_branch_coverage=1 00:06:13.830 --rc genhtml_function_coverage=1 00:06:13.830 --rc genhtml_legend=1 00:06:13.830 --rc geninfo_all_blocks=1 00:06:13.830 --rc geninfo_unexecuted_blocks=1 00:06:13.830 00:06:13.830 ' 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:13.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.830 --rc genhtml_branch_coverage=1 00:06:13.830 --rc genhtml_function_coverage=1 00:06:13.830 --rc genhtml_legend=1 00:06:13.830 --rc geninfo_all_blocks=1 00:06:13.830 --rc geninfo_unexecuted_blocks=1 00:06:13.830 00:06:13.830 ' 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
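The lt/cmp_versions trace above is a component-wise numeric version compare (here deciding that lcov 1.15 predates 2). A simplified sketch of the idiom, not the harness's exact implementation, and assuming purely numeric components:

lt() {
    # return 0 when version $1 sorts before version $2
    local IFS=.- i
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov 1.15 predates 2"   # matches the branch taken above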
00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:13.830 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:13.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:13.831 13:56:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:21.973 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:21.973 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:21.973 13:56:27 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:21.973 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.973 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:21.973 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:21.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:21.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:06:21.974 00:06:21.974 --- 10.0.0.2 ping statistics --- 00:06:21.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:21.974 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:21.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:21.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:06:21.974 00:06:21.974 --- 10.0.0.1 ping statistics --- 00:06:21.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:21.974 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2535173 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2535173 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2535173 ']' 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.974 13:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:21.974 [2024-12-05 13:56:27.585322] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
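The network plumbing traced above (nvmf_tcp_init) reduces to a handful of ip/iptables commands, condensed here for reference. cvl_0_0/cvl_0_1 are the e810 netdev names discovered on this machine, not universal:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator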
00:06:21.974 [2024-12-05 13:56:27.585383] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:21.974 [2024-12-05 13:56:27.683405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.974 [2024-12-05 13:56:27.736553] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:21.974 [2024-12-05 13:56:27.736605] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:21.974 [2024-12-05 13:56:27.736614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:21.974 [2024-12-05 13:56:27.736621] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:21.974 [2024-12-05 13:56:27.736627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:21.974 [2024-12-05 13:56:27.738502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.974 [2024-12-05 13:56:27.738563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.974 [2024-12-05 13:56:27.738592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.235 13:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.235 13:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:22.235 13:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:22.235 13:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:22.235 13:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:22.235 13:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:22.235 13:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:22.495 [2024-12-05 13:56:28.629630] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.495 13:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:22.755 13:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:22.755 13:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:23.015 13:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:23.015 13:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:23.276 13:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:23.276 13:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=376fe1ed-050a-4462-9b8d-7bc45b446357 00:06:23.276 13:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 376fe1ed-050a-4462-9b8d-7bc45b446357 lvol 20 00:06:23.536 13:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=347fa0c1-c1d7-490f-94e8-cef680848a24 00:06:23.536 13:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:23.797 13:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 347fa0c1-c1d7-490f-94e8-cef680848a24 00:06:24.057 13:56:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:24.057 [2024-12-05 13:56:30.308242] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:24.057 13:56:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:24.317 13:56:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2535758 00:06:24.317 13:56:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:24.317 13:56:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:25.698 13:56:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 347fa0c1-c1d7-490f-94e8-cef680848a24 MY_SNAPSHOT 00:06:25.698 13:56:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3ad7de47-facd-492b-9538-d7a26486a3d2 00:06:25.698 13:56:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 347fa0c1-c1d7-490f-94e8-cef680848a24 30 00:06:25.698 13:56:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 3ad7de47-facd-492b-9538-d7a26486a3d2 MY_CLONE 00:06:25.956 13:56:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3fc564e8-3dba-4c9f-9a94-b23c1edf5ea0 00:06:25.956 13:56:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3fc564e8-3dba-4c9f-9a94-b23c1edf5ea0 00:06:26.525 13:56:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2535758 00:06:34.658 Initializing NVMe Controllers 00:06:34.658 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:34.658 Controller IO queue size 128, less than required. 00:06:34.658 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:34.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:34.658 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:34.658 Initialization complete. Launching workers. 00:06:34.658 ======================================================== 00:06:34.658 Latency(us) 00:06:34.658 Device Information : IOPS MiB/s Average min max 00:06:34.658 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16271.50 63.56 7868.94 1912.29 41692.56 00:06:34.658 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15944.70 62.28 8029.15 1388.92 67352.32 00:06:34.658 ======================================================== 00:06:34.658 Total : 32216.20 125.84 7948.23 1388.92 67352.32 00:06:34.658 00:06:34.918 13:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:34.918 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 347fa0c1-c1d7-490f-94e8-cef680848a24 00:06:35.178 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 376fe1ed-050a-4462-9b8d-7bc45b446357 00:06:35.439 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:35.439 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:35.439 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:35.439 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:35.439 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:35.439 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:35.439 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:35.439 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:35.439 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:35.439 rmmod nvme_tcp 00:06:35.439 rmmod nvme_fabrics 00:06:35.439 rmmod nvme_keyring 00:06:35.439 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:35.439 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:35.439 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:35.439 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2535173 ']' 00:06:35.439 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2535173 00:06:35.439 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2535173 ']' 00:06:35.439 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2535173 00:06:35.439 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:35.439 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.439 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2535173 00:06:35.439 13:56:41 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.439 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.439 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2535173' 00:06:35.439 killing process with pid 2535173 00:06:35.439 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2535173 00:06:35.439 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2535173 00:06:35.699 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:35.699 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:35.699 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:35.699 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:35.699 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:35.699 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:35.699 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:35.699 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:35.699 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:35.699 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.699 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:35.699 13:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.611 13:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:37.611 00:06:37.611 real 0m23.998s 00:06:37.611 user 1m5.296s 00:06:37.611 sys 0m8.674s 00:06:37.611 13:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.611 13:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:37.611 ************************************ 00:06:37.611 END TEST nvmf_lvol 00:06:37.611 ************************************ 00:06:37.611 13:56:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:37.611 13:56:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:37.611 13:56:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.611 13:56:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:37.873 ************************************ 00:06:37.873 START TEST nvmf_lvs_grow 00:06:37.873 ************************************ 00:06:37.873 13:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:37.873 * Looking for test storage... 
00:06:37.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:37.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.873 --rc genhtml_branch_coverage=1 00:06:37.873 --rc genhtml_function_coverage=1 00:06:37.873 --rc genhtml_legend=1 00:06:37.873 --rc geninfo_all_blocks=1 00:06:37.873 --rc geninfo_unexecuted_blocks=1 00:06:37.873 00:06:37.873 ' 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:37.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.873 --rc genhtml_branch_coverage=1 00:06:37.873 --rc genhtml_function_coverage=1 00:06:37.873 --rc genhtml_legend=1 00:06:37.873 --rc geninfo_all_blocks=1 00:06:37.873 --rc geninfo_unexecuted_blocks=1 00:06:37.873 00:06:37.873 ' 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:37.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.873 --rc genhtml_branch_coverage=1 00:06:37.873 --rc genhtml_function_coverage=1 00:06:37.873 --rc genhtml_legend=1 00:06:37.873 --rc geninfo_all_blocks=1 00:06:37.873 --rc geninfo_unexecuted_blocks=1 00:06:37.873 00:06:37.873 ' 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:37.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.873 --rc genhtml_branch_coverage=1 00:06:37.873 --rc genhtml_function_coverage=1 00:06:37.873 --rc genhtml_legend=1 00:06:37.873 --rc geninfo_all_blocks=1 00:06:37.873 --rc geninfo_unexecuted_blocks=1 00:06:37.873 00:06:37.873 ' 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:37.873 13:56:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.873 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:37.874 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:37.874 13:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:46.016 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:46.016 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:46.016 13:56:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:46.016 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.016 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:46.016 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:46.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:46.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:06:46.017 00:06:46.017 --- 10.0.0.2 ping statistics --- 00:06:46.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.017 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:46.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:46.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:06:46.017 00:06:46.017 --- 10.0.0.1 ping statistics --- 00:06:46.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.017 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2542261 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2542261 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2542261 ']' 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.017 13:56:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:46.017 [2024-12-05 13:56:51.679663] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:06:46.017 [2024-12-05 13:56:51.679732] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.017 [2024-12-05 13:56:51.781089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.017 [2024-12-05 13:56:51.832742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:46.017 [2024-12-05 13:56:51.832794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:46.017 [2024-12-05 13:56:51.832803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:46.017 [2024-12-05 13:56:51.832810] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:46.017 [2024-12-05 13:56:51.832816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:46.017 [2024-12-05 13:56:51.833613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.279 13:56:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.279 13:56:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:46.279 13:56:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:46.279 13:56:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:46.279 13:56:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:46.279 13:56:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:46.279 13:56:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:46.544 [2024-12-05 13:56:52.709874] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.544 13:56:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:46.544 13:56:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.544 13:56:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.544 13:56:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:46.544 ************************************ 00:06:46.544 START TEST lvs_grow_clean 00:06:46.544 ************************************ 00:06:46.544 13:56:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:46.544 13:56:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:46.544 13:56:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:46.544 13:56:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:46.544 13:56:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:46.544 13:56:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:46.544 13:56:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:46.544 13:56:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:46.544 13:56:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:46.544 13:56:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:46.805 13:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:46.805 13:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:47.066 13:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=22f6b498-ed4b-47ca-bab4-5eafda9d5343 00:06:47.066 13:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22f6b498-ed4b-47ca-bab4-5eafda9d5343 00:06:47.066 13:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:47.327 13:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:47.327 13:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:47.327 13:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 22f6b498-ed4b-47ca-bab4-5eafda9d5343 lvol 150 00:06:47.327 13:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3e305601-d23e-477f-93cf-5e241f683e28 00:06:47.327 13:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:47.327 13:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:47.588 [2024-12-05 13:56:53.762011] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:47.588 [2024-12-05 13:56:53.762087] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:47.588 true 00:06:47.588 13:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
22f6b498-ed4b-47ca-bab4-5eafda9d5343 00:06:47.588 13:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:47.847 13:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:47.847 13:56:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:48.107 13:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3e305601-d23e-477f-93cf-5e241f683e28 00:06:48.107 13:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:48.367 [2024-12-05 13:56:54.500337] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:48.367 13:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:48.628 13:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2542835 00:06:48.628 13:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:48.628 13:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:48.628 13:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2542835 /var/tmp/bdevperf.sock 00:06:48.628 13:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2542835 ']' 00:06:48.628 13:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:48.628 13:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.628 13:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:48.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:48.628 13:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.628 13:56:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:48.628 [2024-12-05 13:56:54.789300] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:06:48.628 [2024-12-05 13:56:54.789372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2542835 ] 00:06:48.628 [2024-12-05 13:56:54.882011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.891 [2024-12-05 13:56:54.934408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.462 13:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.462 13:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:49.462 13:56:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:49.722 Nvme0n1 00:06:49.982 13:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:49.982 [ 00:06:49.982 { 00:06:49.982 "name": "Nvme0n1", 00:06:49.982 "aliases": [ 00:06:49.982 "3e305601-d23e-477f-93cf-5e241f683e28" 00:06:49.982 ], 00:06:49.982 "product_name": "NVMe disk", 00:06:49.982 "block_size": 4096, 00:06:49.982 "num_blocks": 38912, 00:06:49.982 "uuid": "3e305601-d23e-477f-93cf-5e241f683e28", 00:06:49.982 "numa_id": 0, 00:06:49.982 "assigned_rate_limits": { 00:06:49.982 "rw_ios_per_sec": 0, 00:06:49.982 "rw_mbytes_per_sec": 0, 00:06:49.982 "r_mbytes_per_sec": 0, 00:06:49.982 "w_mbytes_per_sec": 0 00:06:49.982 }, 00:06:49.982 "claimed": false, 00:06:49.982 "zoned": false, 00:06:49.982 "supported_io_types": { 00:06:49.982 "read": true, 00:06:49.982 "write": true, 00:06:49.982 "unmap": true, 00:06:49.982 "flush": true, 00:06:49.982 "reset": true, 00:06:49.982 "nvme_admin": true, 00:06:49.982 "nvme_io": true, 00:06:49.982 "nvme_io_md": false, 00:06:49.982 "write_zeroes": true, 00:06:49.982 "zcopy": false, 00:06:49.982 "get_zone_info": false, 00:06:49.982 "zone_management": false, 00:06:49.982 "zone_append": false, 00:06:49.982 "compare": true, 00:06:49.982 "compare_and_write": true, 00:06:49.982 "abort": true, 00:06:49.982 "seek_hole": false, 00:06:49.982 "seek_data": false, 00:06:49.982 "copy": true, 00:06:49.982 "nvme_iov_md": false 00:06:49.982 }, 00:06:49.982 "memory_domains": [ 00:06:49.982 { 00:06:49.982 "dma_device_id": "system", 00:06:49.982 "dma_device_type": 1 00:06:49.982 } 00:06:49.982 ], 00:06:49.982 "driver_specific": { 00:06:49.982 "nvme": [ 00:06:49.982 { 00:06:49.982 "trid": { 00:06:49.982 "trtype": "TCP", 00:06:49.982 "adrfam": "IPv4", 00:06:49.982 "traddr": "10.0.0.2", 00:06:49.982 "trsvcid": "4420", 00:06:49.982 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:49.982 }, 00:06:49.982 "ctrlr_data": { 00:06:49.982 "cntlid": 1, 00:06:49.982 "vendor_id": "0x8086", 00:06:49.982 "model_number": "SPDK bdev Controller", 00:06:49.982 "serial_number": "SPDK0", 00:06:49.982 "firmware_revision": "25.01", 00:06:49.982 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:49.982 "oacs": { 00:06:49.983 "security": 0, 00:06:49.983 "format": 0, 00:06:49.983 "firmware": 0, 00:06:49.983 "ns_manage": 0 00:06:49.983 }, 00:06:49.983 "multi_ctrlr": true, 00:06:49.983 
"ana_reporting": false 00:06:49.983 }, 00:06:49.983 "vs": { 00:06:49.983 "nvme_version": "1.3" 00:06:49.983 }, 00:06:49.983 "ns_data": { 00:06:49.983 "id": 1, 00:06:49.983 "can_share": true 00:06:49.983 } 00:06:49.983 } 00:06:49.983 ], 00:06:49.983 "mp_policy": "active_passive" 00:06:49.983 } 00:06:49.983 } 00:06:49.983 ] 00:06:49.983 13:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:49.983 13:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2543176 00:06:49.983 13:56:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:49.983 Running I/O for 10 seconds... 00:06:51.370 Latency(us) 00:06:51.370 [2024-12-05T12:56:57.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:51.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:51.370 Nvme0n1 : 1.00 25236.00 98.58 0.00 0.00 0.00 0.00 0.00 00:06:51.370 [2024-12-05T12:56:57.670Z] =================================================================================================================== 00:06:51.370 [2024-12-05T12:56:57.670Z] Total : 25236.00 98.58 0.00 0.00 0.00 0.00 0.00 00:06:51.370 00:06:51.941 13:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 22f6b498-ed4b-47ca-bab4-5eafda9d5343 00:06:52.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:52.203 Nvme0n1 : 2.00 25395.00 99.20 0.00 0.00 0.00 0.00 0.00 00:06:52.203 [2024-12-05T12:56:58.503Z] =================================================================================================================== 00:06:52.203 [2024-12-05T12:56:58.503Z] Total : 25395.00 99.20 0.00 0.00 0.00 0.00 0.00 00:06:52.203 00:06:52.203 true 00:06:52.203 13:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22f6b498-ed4b-47ca-bab4-5eafda9d5343 00:06:52.203 13:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:52.463 13:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:52.463 13:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:52.463 13:56:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2543176 00:06:53.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:53.034 Nvme0n1 : 3.00 25442.33 99.38 0.00 0.00 0.00 0.00 0.00 00:06:53.034 [2024-12-05T12:56:59.334Z] =================================================================================================================== 00:06:53.034 [2024-12-05T12:56:59.334Z] Total : 25442.33 99.38 0.00 0.00 0.00 0.00 0.00 00:06:53.034 00:06:54.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:54.421 Nvme0n1 : 4.00 25464.25 99.47 0.00 0.00 0.00 0.00 0.00 00:06:54.421 [2024-12-05T12:57:00.721Z] 
=================================================================================================================== 00:06:54.421 [2024-12-05T12:57:00.721Z] Total : 25464.25 99.47 0.00 0.00 0.00 0.00 0.00 00:06:54.421 00:06:55.364 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:55.364 Nvme0n1 : 5.00 25489.40 99.57 0.00 0.00 0.00 0.00 0.00 00:06:55.364 [2024-12-05T12:57:01.664Z] =================================================================================================================== 00:06:55.364 [2024-12-05T12:57:01.664Z] Total : 25489.40 99.57 0.00 0.00 0.00 0.00 0.00 00:06:55.364 00:06:56.307 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:56.307 Nvme0n1 : 6.00 25507.83 99.64 0.00 0.00 0.00 0.00 0.00 00:06:56.307 [2024-12-05T12:57:02.607Z] =================================================================================================================== 00:06:56.307 [2024-12-05T12:57:02.607Z] Total : 25507.83 99.64 0.00 0.00 0.00 0.00 0.00 00:06:56.307 00:06:57.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:57.360 Nvme0n1 : 7.00 25539.00 99.76 0.00 0.00 0.00 0.00 0.00 00:06:57.360 [2024-12-05T12:57:03.660Z] =================================================================================================================== 00:06:57.360 [2024-12-05T12:57:03.660Z] Total : 25539.00 99.76 0.00 0.00 0.00 0.00 0.00 00:06:57.360 00:06:58.330 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.330 Nvme0n1 : 8.00 25554.50 99.82 0.00 0.00 0.00 0.00 0.00 00:06:58.330 [2024-12-05T12:57:04.630Z] =================================================================================================================== 00:06:58.330 [2024-12-05T12:57:04.630Z] Total : 25554.50 99.82 0.00 0.00 0.00 0.00 0.00 00:06:58.330 00:06:59.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.271 Nvme0n1 : 9.00 25573.67 99.90 0.00 0.00 0.00 0.00 0.00 00:06:59.271 [2024-12-05T12:57:05.571Z] =================================================================================================================== 00:06:59.271 [2024-12-05T12:57:05.571Z] Total : 25573.67 99.90 0.00 0.00 0.00 0.00 0.00 00:06:59.271 00:07:00.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:00.246 Nvme0n1 : 10.00 25589.00 99.96 0.00 0.00 0.00 0.00 0.00 00:07:00.246 [2024-12-05T12:57:06.546Z] =================================================================================================================== 00:07:00.246 [2024-12-05T12:57:06.546Z] Total : 25589.00 99.96 0.00 0.00 0.00 0.00 0.00 00:07:00.246 00:07:00.246 00:07:00.246 Latency(us) 00:07:00.246 [2024-12-05T12:57:06.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:00.246 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:00.246 Nvme0n1 : 10.00 25592.58 99.97 0.00 0.00 4998.01 2525.87 9666.56 00:07:00.246 [2024-12-05T12:57:06.546Z] =================================================================================================================== 00:07:00.246 [2024-12-05T12:57:06.546Z] Total : 25592.58 99.97 0.00 0.00 4998.01 2525.87 9666.56 00:07:00.246 { 00:07:00.246 "results": [ 00:07:00.246 { 00:07:00.246 "job": "Nvme0n1", 00:07:00.246 "core_mask": "0x2", 00:07:00.246 "workload": "randwrite", 00:07:00.246 "status": "finished", 00:07:00.246 "queue_depth": 128, 00:07:00.246 "io_size": 4096, 00:07:00.246 
"runtime": 10.003603, 00:07:00.246 "iops": 25592.57899378854, 00:07:00.246 "mibps": 99.97101169448648, 00:07:00.246 "io_failed": 0, 00:07:00.246 "io_timeout": 0, 00:07:00.246 "avg_latency_us": 4998.012337257536, 00:07:00.246 "min_latency_us": 2525.866666666667, 00:07:00.246 "max_latency_us": 9666.56 00:07:00.246 } 00:07:00.246 ], 00:07:00.246 "core_count": 1 00:07:00.246 } 00:07:00.246 13:57:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2542835 00:07:00.246 13:57:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2542835 ']' 00:07:00.246 13:57:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2542835 00:07:00.246 13:57:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:00.246 13:57:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.246 13:57:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2542835 00:07:00.246 13:57:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:00.246 13:57:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:00.246 13:57:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2542835' 00:07:00.246 killing process with pid 2542835 00:07:00.246 13:57:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2542835 00:07:00.246 Received shutdown signal, test time was about 10.000000 seconds 00:07:00.246 00:07:00.246 Latency(us) 00:07:00.246 [2024-12-05T12:57:06.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:00.246 [2024-12-05T12:57:06.546Z] =================================================================================================================== 00:07:00.246 [2024-12-05T12:57:06.546Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:00.246 13:57:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2542835 00:07:00.246 13:57:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:00.506 13:57:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:00.766 13:57:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22f6b498-ed4b-47ca-bab4-5eafda9d5343 00:07:00.766 13:57:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:01.026 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:01.026 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:01.026 13:57:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:01.026 [2024-12-05 13:57:07.215753] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:01.026 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22f6b498-ed4b-47ca-bab4-5eafda9d5343 00:07:01.026 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:01.026 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22f6b498-ed4b-47ca-bab4-5eafda9d5343 00:07:01.026 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.026 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.026 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.026 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.026 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.026 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.026 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.026 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:01.026 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22f6b498-ed4b-47ca-bab4-5eafda9d5343 00:07:01.287 request: 00:07:01.287 { 00:07:01.287 "uuid": "22f6b498-ed4b-47ca-bab4-5eafda9d5343", 00:07:01.287 "method": "bdev_lvol_get_lvstores", 00:07:01.287 "req_id": 1 00:07:01.287 } 00:07:01.287 Got JSON-RPC error response 00:07:01.287 response: 00:07:01.287 { 00:07:01.287 "code": -19, 00:07:01.287 "message": "No such device" 00:07:01.287 } 00:07:01.287 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:01.287 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:01.287 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:01.287 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:01.287 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:01.548 aio_bdev 00:07:01.548 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3e305601-d23e-477f-93cf-5e241f683e28 00:07:01.548 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=3e305601-d23e-477f-93cf-5e241f683e28 00:07:01.548 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:01.548 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:01.548 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:01.548 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:01.548 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:01.548 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3e305601-d23e-477f-93cf-5e241f683e28 -t 2000 00:07:01.809 [ 00:07:01.809 { 00:07:01.809 "name": "3e305601-d23e-477f-93cf-5e241f683e28", 00:07:01.809 "aliases": [ 00:07:01.809 "lvs/lvol" 00:07:01.809 ], 00:07:01.809 "product_name": "Logical Volume", 00:07:01.809 "block_size": 4096, 00:07:01.809 "num_blocks": 38912, 00:07:01.809 "uuid": "3e305601-d23e-477f-93cf-5e241f683e28", 00:07:01.809 "assigned_rate_limits": { 00:07:01.809 "rw_ios_per_sec": 0, 00:07:01.809 "rw_mbytes_per_sec": 0, 00:07:01.809 "r_mbytes_per_sec": 0, 00:07:01.809 "w_mbytes_per_sec": 0 00:07:01.809 }, 00:07:01.809 "claimed": false, 00:07:01.809 "zoned": false, 00:07:01.809 "supported_io_types": { 00:07:01.809 "read": true, 00:07:01.809 "write": true, 00:07:01.809 "unmap": true, 00:07:01.809 "flush": false, 00:07:01.809 "reset": true, 00:07:01.809 "nvme_admin": false, 00:07:01.809 "nvme_io": false, 00:07:01.809 "nvme_io_md": false, 00:07:01.809 "write_zeroes": true, 00:07:01.809 "zcopy": false, 00:07:01.809 "get_zone_info": false, 00:07:01.809 "zone_management": false, 00:07:01.809 "zone_append": false, 00:07:01.809 "compare": false, 00:07:01.809 "compare_and_write": false, 00:07:01.809 "abort": false, 00:07:01.809 "seek_hole": true, 00:07:01.809 "seek_data": true, 00:07:01.809 "copy": false, 00:07:01.809 "nvme_iov_md": false 00:07:01.809 }, 00:07:01.809 "driver_specific": { 00:07:01.809 "lvol": { 00:07:01.809 "lvol_store_uuid": "22f6b498-ed4b-47ca-bab4-5eafda9d5343", 00:07:01.809 "base_bdev": "aio_bdev", 00:07:01.809 "thin_provision": false, 00:07:01.809 "num_allocated_clusters": 38, 00:07:01.809 "snapshot": false, 00:07:01.809 "clone": false, 00:07:01.809 "esnap_clone": false 00:07:01.809 } 00:07:01.809 } 00:07:01.809 } 00:07:01.809 ] 00:07:01.809 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:01.809 13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22f6b498-ed4b-47ca-bab4-5eafda9d5343 00:07:01.809 
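The trace above is the heart of the clean-path recovery check: deleting aio_bdev hot-removes the lvstore, bdev_lvol_get_lvstores then fails with -19 (No such device), and re-creating the AIO bdev over the same backing file lets examine replay the on-disk lvstore metadata. A minimal sketch of that verification, assuming rpc.py talks to the default /var/tmp/spdk.sock, with the long workspace paths shortened and $aio_file / $lvs_uuid standing in for the concrete values in the log:

  # Re-create the AIO bdev over the same backing file; examine replays the lvstore.
  rpc.py bdev_aio_create "$aio_file" aio_bdev 4096
  rpc.py bdev_wait_for_examine
  # Metadata survived the hot-remove if the cluster counts are unchanged.
  free=$(rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters')
  total=$(rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters')
  (( free == 61 && total == 99 ))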
13:57:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:02.069 13:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:02.069 13:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22f6b498-ed4b-47ca-bab4-5eafda9d5343 00:07:02.069 13:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:02.069 13:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:02.069 13:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3e305601-d23e-477f-93cf-5e241f683e28 00:07:02.330 13:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 22f6b498-ed4b-47ca-bab4-5eafda9d5343 00:07:02.591 13:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:02.591 13:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:02.591 00:07:02.591 real 0m16.098s 00:07:02.591 user 0m15.712s 00:07:02.591 sys 0m1.443s 00:07:02.591 13:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.591 13:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:02.591 ************************************ 00:07:02.591 END TEST lvs_grow_clean 00:07:02.591 ************************************ 00:07:02.852 13:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:02.852 13:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:02.852 13:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.852 13:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:02.852 ************************************ 00:07:02.852 START TEST lvs_grow_dirty 00:07:02.852 ************************************ 00:07:02.852 13:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:02.852 13:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:02.852 13:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:02.852 13:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:02.852 13:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:02.852 13:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:02.852 13:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:02.852 13:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:02.852 13:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:02.852 13:57:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:03.114 13:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:03.114 13:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:03.114 13:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3b098670-17a7-45b9-b8e1-00c1c894fd64 00:07:03.114 13:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b098670-17a7-45b9-b8e1-00c1c894fd64 00:07:03.114 13:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:03.405 13:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:03.405 13:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:03.405 13:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3b098670-17a7-45b9-b8e1-00c1c894fd64 lvol 150 00:07:03.665 13:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=80a9b881-6f7d-4205-a2a9-586c25dd508c 00:07:03.665 13:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:03.665 13:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:03.665 [2024-12-05 13:57:09.823714] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:03.665 [2024-12-05 13:57:09.823755] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:03.665 true 00:07:03.665 13:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b098670-17a7-45b9-b8e1-00c1c894fd64 00:07:03.665 13:57:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:03.927 13:57:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:03.927 13:57:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:03.927 13:57:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 80a9b881-6f7d-4205-a2a9-586c25dd508c 00:07:04.187 13:57:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:04.448 [2024-12-05 13:57:10.497649] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:04.448 13:57:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:04.448 13:57:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2546762 00:07:04.448 13:57:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:04.448 13:57:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:04.448 13:57:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2546762 /var/tmp/bdevperf.sock 00:07:04.448 13:57:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2546762 ']' 00:07:04.448 13:57:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:04.448 13:57:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.448 13:57:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:04.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:04.448 13:57:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.448 13:57:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:04.708 [2024-12-05 13:57:10.746775] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
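At this point the dirty-path plumbing is in place: subsystem nqn.2016-06.io.spdk:cnode0 exposes the lvol over NVMe/TCP on 10.0.0.2:4420, and bdevperf is starting as a second SPDK app with its own RPC socket. Condensed into the shell steps the trace performs (rpc.py and bdevperf are invoked by their full workspace paths in the log; $lvol is the UUID returned by bdev_lvol_create above):

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # -z makes bdevperf wait for an explicit perform_tests RPC before running I/O
  bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests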
00:07:04.708 [2024-12-05 13:57:10.746833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2546762 ] 00:07:04.708 [2024-12-05 13:57:10.830531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.708 [2024-12-05 13:57:10.860299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.278 13:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.278 13:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:05.278 13:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:05.539 Nvme0n1 00:07:05.539 13:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:05.799 [ 00:07:05.799 { 00:07:05.799 "name": "Nvme0n1", 00:07:05.799 "aliases": [ 00:07:05.799 "80a9b881-6f7d-4205-a2a9-586c25dd508c" 00:07:05.799 ], 00:07:05.799 "product_name": "NVMe disk", 00:07:05.799 "block_size": 4096, 00:07:05.799 "num_blocks": 38912, 00:07:05.799 "uuid": "80a9b881-6f7d-4205-a2a9-586c25dd508c", 00:07:05.799 "numa_id": 0, 00:07:05.799 "assigned_rate_limits": { 00:07:05.799 "rw_ios_per_sec": 0, 00:07:05.799 "rw_mbytes_per_sec": 0, 00:07:05.799 "r_mbytes_per_sec": 0, 00:07:05.799 "w_mbytes_per_sec": 0 00:07:05.799 }, 00:07:05.799 "claimed": false, 00:07:05.799 "zoned": false, 00:07:05.799 "supported_io_types": { 00:07:05.799 "read": true, 00:07:05.799 "write": true, 00:07:05.799 "unmap": true, 00:07:05.799 "flush": true, 00:07:05.799 "reset": true, 00:07:05.799 "nvme_admin": true, 00:07:05.799 "nvme_io": true, 00:07:05.799 "nvme_io_md": false, 00:07:05.799 "write_zeroes": true, 00:07:05.799 "zcopy": false, 00:07:05.799 "get_zone_info": false, 00:07:05.799 "zone_management": false, 00:07:05.799 "zone_append": false, 00:07:05.799 "compare": true, 00:07:05.799 "compare_and_write": true, 00:07:05.799 "abort": true, 00:07:05.799 "seek_hole": false, 00:07:05.799 "seek_data": false, 00:07:05.799 "copy": true, 00:07:05.799 "nvme_iov_md": false 00:07:05.799 }, 00:07:05.799 "memory_domains": [ 00:07:05.799 { 00:07:05.799 "dma_device_id": "system", 00:07:05.799 "dma_device_type": 1 00:07:05.799 } 00:07:05.799 ], 00:07:05.799 "driver_specific": { 00:07:05.799 "nvme": [ 00:07:05.799 { 00:07:05.799 "trid": { 00:07:05.799 "trtype": "TCP", 00:07:05.799 "adrfam": "IPv4", 00:07:05.799 "traddr": "10.0.0.2", 00:07:05.799 "trsvcid": "4420", 00:07:05.799 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:05.799 }, 00:07:05.799 "ctrlr_data": { 00:07:05.799 "cntlid": 1, 00:07:05.799 "vendor_id": "0x8086", 00:07:05.799 "model_number": "SPDK bdev Controller", 00:07:05.799 "serial_number": "SPDK0", 00:07:05.799 "firmware_revision": "25.01", 00:07:05.799 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:05.799 "oacs": { 00:07:05.799 "security": 0, 00:07:05.799 "format": 0, 00:07:05.799 "firmware": 0, 00:07:05.799 "ns_manage": 0 00:07:05.799 }, 00:07:05.799 "multi_ctrlr": true, 00:07:05.799 
"ana_reporting": false 00:07:05.799 }, 00:07:05.799 "vs": { 00:07:05.799 "nvme_version": "1.3" 00:07:05.799 }, 00:07:05.799 "ns_data": { 00:07:05.799 "id": 1, 00:07:05.799 "can_share": true 00:07:05.799 } 00:07:05.799 } 00:07:05.799 ], 00:07:05.799 "mp_policy": "active_passive" 00:07:05.799 } 00:07:05.799 } 00:07:05.799 ] 00:07:05.799 13:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2546942 00:07:05.799 13:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:05.799 13:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:05.799 Running I/O for 10 seconds... 00:07:07.181 Latency(us) 00:07:07.181 [2024-12-05T12:57:13.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:07.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.181 Nvme0n1 : 1.00 25239.00 98.59 0.00 0.00 0.00 0.00 0.00 00:07:07.181 [2024-12-05T12:57:13.481Z] =================================================================================================================== 00:07:07.181 [2024-12-05T12:57:13.481Z] Total : 25239.00 98.59 0.00 0.00 0.00 0.00 0.00 00:07:07.181 00:07:07.750 13:57:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3b098670-17a7-45b9-b8e1-00c1c894fd64 00:07:08.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.010 Nvme0n1 : 2.00 25355.00 99.04 0.00 0.00 0.00 0.00 0.00 00:07:08.010 [2024-12-05T12:57:14.310Z] =================================================================================================================== 00:07:08.010 [2024-12-05T12:57:14.310Z] Total : 25355.00 99.04 0.00 0.00 0.00 0.00 0.00 00:07:08.010 00:07:08.010 true 00:07:08.010 13:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b098670-17a7-45b9-b8e1-00c1c894fd64 00:07:08.010 13:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:08.271 13:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:08.271 13:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:08.271 13:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2546942 00:07:08.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.841 Nvme0n1 : 3.00 25415.33 99.28 0.00 0.00 0.00 0.00 0.00 00:07:08.841 [2024-12-05T12:57:15.141Z] =================================================================================================================== 00:07:08.841 [2024-12-05T12:57:15.141Z] Total : 25415.33 99.28 0.00 0.00 0.00 0.00 0.00 00:07:08.841 00:07:10.223 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.223 Nvme0n1 : 4.00 25465.25 99.47 0.00 0.00 0.00 0.00 0.00 00:07:10.223 [2024-12-05T12:57:16.523Z] 
=================================================================================================================== 00:07:10.223 [2024-12-05T12:57:16.523Z] Total : 25465.25 99.47 0.00 0.00 0.00 0.00 0.00 00:07:10.223 00:07:10.794 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.794 Nvme0n1 : 5.00 25501.20 99.61 0.00 0.00 0.00 0.00 0.00 00:07:10.794 [2024-12-05T12:57:17.094Z] =================================================================================================================== 00:07:10.794 [2024-12-05T12:57:17.094Z] Total : 25501.20 99.61 0.00 0.00 0.00 0.00 0.00 00:07:10.794 00:07:12.180 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.180 Nvme0n1 : 6.00 25528.50 99.72 0.00 0.00 0.00 0.00 0.00 00:07:12.180 [2024-12-05T12:57:18.481Z] =================================================================================================================== 00:07:12.181 [2024-12-05T12:57:18.481Z] Total : 25528.50 99.72 0.00 0.00 0.00 0.00 0.00 00:07:12.181 00:07:13.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.123 Nvme0n1 : 7.00 25547.57 99.80 0.00 0.00 0.00 0.00 0.00 00:07:13.123 [2024-12-05T12:57:19.423Z] =================================================================================================================== 00:07:13.123 [2024-12-05T12:57:19.423Z] Total : 25547.57 99.80 0.00 0.00 0.00 0.00 0.00 00:07:13.123 00:07:14.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.065 Nvme0n1 : 8.00 25562.00 99.85 0.00 0.00 0.00 0.00 0.00 00:07:14.065 [2024-12-05T12:57:20.365Z] =================================================================================================================== 00:07:14.065 [2024-12-05T12:57:20.365Z] Total : 25562.00 99.85 0.00 0.00 0.00 0.00 0.00 00:07:14.065 00:07:15.004 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.004 Nvme0n1 : 9.00 25573.33 99.90 0.00 0.00 0.00 0.00 0.00 00:07:15.004 [2024-12-05T12:57:21.304Z] =================================================================================================================== 00:07:15.004 [2024-12-05T12:57:21.304Z] Total : 25573.33 99.90 0.00 0.00 0.00 0.00 0.00 00:07:15.004 00:07:15.945 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.945 Nvme0n1 : 10.00 25582.30 99.93 0.00 0.00 0.00 0.00 0.00 00:07:15.945 [2024-12-05T12:57:22.245Z] =================================================================================================================== 00:07:15.945 [2024-12-05T12:57:22.245Z] Total : 25582.30 99.93 0.00 0.00 0.00 0.00 0.00 00:07:15.945 00:07:15.945 00:07:15.945 Latency(us) 00:07:15.945 [2024-12-05T12:57:22.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.945 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.945 Nvme0n1 : 10.00 25586.98 99.95 0.00 0.00 4999.47 3099.31 9448.11 00:07:15.945 [2024-12-05T12:57:22.245Z] =================================================================================================================== 00:07:15.945 [2024-12-05T12:57:22.245Z] Total : 25586.98 99.95 0.00 0.00 4999.47 3099.31 9448.11 00:07:15.945 { 00:07:15.945 "results": [ 00:07:15.945 { 00:07:15.945 "job": "Nvme0n1", 00:07:15.945 "core_mask": "0x2", 00:07:15.945 "workload": "randwrite", 00:07:15.945 "status": "finished", 00:07:15.945 "queue_depth": 128, 00:07:15.945 "io_size": 4096, 00:07:15.945 
"runtime": 10.003175, 00:07:15.945 "iops": 25586.97613507711, 00:07:15.945 "mibps": 99.94912552764497, 00:07:15.945 "io_failed": 0, 00:07:15.945 "io_timeout": 0, 00:07:15.945 "avg_latency_us": 4999.4747440460615, 00:07:15.945 "min_latency_us": 3099.306666666667, 00:07:15.945 "max_latency_us": 9448.106666666667 00:07:15.945 } 00:07:15.945 ], 00:07:15.945 "core_count": 1 00:07:15.945 } 00:07:15.945 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2546762 00:07:15.945 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2546762 ']' 00:07:15.945 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2546762 00:07:15.945 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:15.945 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.945 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2546762 00:07:15.945 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:15.945 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:15.945 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2546762' 00:07:15.945 killing process with pid 2546762 00:07:15.945 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2546762 00:07:15.945 Received shutdown signal, test time was about 10.000000 seconds 00:07:15.945 00:07:15.945 Latency(us) 00:07:15.945 [2024-12-05T12:57:22.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.945 [2024-12-05T12:57:22.245Z] =================================================================================================================== 00:07:15.945 [2024-12-05T12:57:22.245Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:15.945 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2546762 00:07:16.208 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:16.208 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:16.471 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b098670-17a7-45b9-b8e1-00c1c894fd64 00:07:16.471 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:16.731 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:16.731 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:16.731 13:57:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2542261 00:07:16.731 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2542261 00:07:16.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2542261 Killed "${NVMF_APP[@]}" "$@" 00:07:16.731 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:16.731 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:16.731 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:16.731 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:16.731 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:16.731 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2549197 00:07:16.731 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2549197 00:07:16.731 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:16.731 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2549197 ']' 00:07:16.731 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.731 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.731 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.731 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.731 13:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:16.731 [2024-12-05 13:57:22.920697] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:07:16.731 [2024-12-05 13:57:22.920751] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.731 [2024-12-05 13:57:23.013038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.991 [2024-12-05 13:57:23.042060] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.991 [2024-12-05 13:57:23.042085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.991 [2024-12-05 13:57:23.042091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.991 [2024-12-05 13:57:23.042099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:16.991 [2024-12-05 13:57:23.042104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:16.991 [2024-12-05 13:57:23.042541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.563 13:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.563 13:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:17.563 13:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:17.563 13:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:17.563 13:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:17.563 13:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.563 13:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:17.823 [2024-12-05 13:57:23.909126] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:17.824 [2024-12-05 13:57:23.909201] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:17.824 [2024-12-05 13:57:23.909224] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:17.824 13:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:17.824 13:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 80a9b881-6f7d-4205-a2a9-586c25dd508c 00:07:17.824 13:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=80a9b881-6f7d-4205-a2a9-586c25dd508c 00:07:17.824 13:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:17.824 13:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:17.824 13:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:17.824 13:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:17.824 13:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:17.824 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 80a9b881-6f7d-4205-a2a9-586c25dd508c -t 2000 00:07:18.085 [ 00:07:18.085 { 00:07:18.085 "name": "80a9b881-6f7d-4205-a2a9-586c25dd508c", 00:07:18.085 "aliases": [ 00:07:18.085 "lvs/lvol" 00:07:18.085 ], 00:07:18.085 "product_name": "Logical Volume", 00:07:18.085 "block_size": 4096, 00:07:18.085 "num_blocks": 38912, 00:07:18.085 "uuid": "80a9b881-6f7d-4205-a2a9-586c25dd508c", 00:07:18.085 "assigned_rate_limits": { 00:07:18.085 "rw_ios_per_sec": 0, 00:07:18.085 "rw_mbytes_per_sec": 0, 
00:07:18.085 "r_mbytes_per_sec": 0, 00:07:18.085 "w_mbytes_per_sec": 0 00:07:18.085 }, 00:07:18.085 "claimed": false, 00:07:18.085 "zoned": false, 00:07:18.085 "supported_io_types": { 00:07:18.085 "read": true, 00:07:18.085 "write": true, 00:07:18.085 "unmap": true, 00:07:18.085 "flush": false, 00:07:18.085 "reset": true, 00:07:18.085 "nvme_admin": false, 00:07:18.085 "nvme_io": false, 00:07:18.085 "nvme_io_md": false, 00:07:18.085 "write_zeroes": true, 00:07:18.085 "zcopy": false, 00:07:18.085 "get_zone_info": false, 00:07:18.085 "zone_management": false, 00:07:18.085 "zone_append": false, 00:07:18.085 "compare": false, 00:07:18.085 "compare_and_write": false, 00:07:18.085 "abort": false, 00:07:18.085 "seek_hole": true, 00:07:18.085 "seek_data": true, 00:07:18.085 "copy": false, 00:07:18.085 "nvme_iov_md": false 00:07:18.085 }, 00:07:18.085 "driver_specific": { 00:07:18.085 "lvol": { 00:07:18.085 "lvol_store_uuid": "3b098670-17a7-45b9-b8e1-00c1c894fd64", 00:07:18.085 "base_bdev": "aio_bdev", 00:07:18.085 "thin_provision": false, 00:07:18.085 "num_allocated_clusters": 38, 00:07:18.085 "snapshot": false, 00:07:18.085 "clone": false, 00:07:18.085 "esnap_clone": false 00:07:18.085 } 00:07:18.085 } 00:07:18.085 } 00:07:18.085 ] 00:07:18.085 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:18.085 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:18.085 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b098670-17a7-45b9-b8e1-00c1c894fd64 00:07:18.347 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:18.347 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b098670-17a7-45b9-b8e1-00c1c894fd64 00:07:18.347 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:18.347 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:18.347 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:18.607 [2024-12-05 13:57:24.765740] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:18.608 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b098670-17a7-45b9-b8e1-00c1c894fd64 00:07:18.608 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:18.608 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b098670-17a7-45b9-b8e1-00c1c894fd64 00:07:18.608 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:18.608 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.608 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:18.608 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.608 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:18.608 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.608 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:18.608 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:18.608 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b098670-17a7-45b9-b8e1-00c1c894fd64 00:07:18.869 request: 00:07:18.869 { 00:07:18.869 "uuid": "3b098670-17a7-45b9-b8e1-00c1c894fd64", 00:07:18.869 "method": "bdev_lvol_get_lvstores", 00:07:18.869 "req_id": 1 00:07:18.869 } 00:07:18.869 Got JSON-RPC error response 00:07:18.869 response: 00:07:18.869 { 00:07:18.869 "code": -19, 00:07:18.869 "message": "No such device" 00:07:18.869 } 00:07:18.869 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:18.869 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:18.869 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:18.869 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:18.870 13:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:18.870 aio_bdev 00:07:18.870 13:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 80a9b881-6f7d-4205-a2a9-586c25dd508c 00:07:18.870 13:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=80a9b881-6f7d-4205-a2a9-586c25dd508c 00:07:18.870 13:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:18.870 13:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:18.870 13:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:18.870 13:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:18.870 13:57:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:19.133 13:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 80a9b881-6f7d-4205-a2a9-586c25dd508c -t 2000 00:07:19.393 [ 00:07:19.393 { 00:07:19.393 "name": "80a9b881-6f7d-4205-a2a9-586c25dd508c", 00:07:19.393 "aliases": [ 00:07:19.393 "lvs/lvol" 00:07:19.393 ], 00:07:19.393 "product_name": "Logical Volume", 00:07:19.393 "block_size": 4096, 00:07:19.393 "num_blocks": 38912, 00:07:19.393 "uuid": "80a9b881-6f7d-4205-a2a9-586c25dd508c", 00:07:19.393 "assigned_rate_limits": { 00:07:19.393 "rw_ios_per_sec": 0, 00:07:19.393 "rw_mbytes_per_sec": 0, 00:07:19.393 "r_mbytes_per_sec": 0, 00:07:19.393 "w_mbytes_per_sec": 0 00:07:19.393 }, 00:07:19.393 "claimed": false, 00:07:19.393 "zoned": false, 00:07:19.393 "supported_io_types": { 00:07:19.393 "read": true, 00:07:19.393 "write": true, 00:07:19.393 "unmap": true, 00:07:19.393 "flush": false, 00:07:19.393 "reset": true, 00:07:19.393 "nvme_admin": false, 00:07:19.393 "nvme_io": false, 00:07:19.393 "nvme_io_md": false, 00:07:19.393 "write_zeroes": true, 00:07:19.393 "zcopy": false, 00:07:19.393 "get_zone_info": false, 00:07:19.393 "zone_management": false, 00:07:19.393 "zone_append": false, 00:07:19.393 "compare": false, 00:07:19.393 "compare_and_write": false, 00:07:19.393 "abort": false, 00:07:19.393 "seek_hole": true, 00:07:19.393 "seek_data": true, 00:07:19.393 "copy": false, 00:07:19.393 "nvme_iov_md": false 00:07:19.393 }, 00:07:19.393 "driver_specific": { 00:07:19.393 "lvol": { 00:07:19.393 "lvol_store_uuid": "3b098670-17a7-45b9-b8e1-00c1c894fd64", 00:07:19.393 "base_bdev": "aio_bdev", 00:07:19.393 "thin_provision": false, 00:07:19.393 "num_allocated_clusters": 38, 00:07:19.393 "snapshot": false, 00:07:19.393 "clone": false, 00:07:19.393 "esnap_clone": false 00:07:19.393 } 00:07:19.393 } 00:07:19.393 } 00:07:19.393 ] 00:07:19.393 13:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:19.393 13:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b098670-17a7-45b9-b8e1-00c1c894fd64 00:07:19.393 13:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:19.393 13:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:19.393 13:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b098670-17a7-45b9-b8e1-00c1c894fd64 00:07:19.393 13:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:19.653 13:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:19.653 13:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 80a9b881-6f7d-4205-a2a9-586c25dd508c 00:07:19.914 13:57:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3b098670-17a7-45b9-b8e1-00c1c894fd64 00:07:19.914 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:20.175 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:20.175 00:07:20.175 real 0m17.359s 00:07:20.175 user 0m45.709s 00:07:20.175 sys 0m3.085s 00:07:20.175 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.175 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:20.175 ************************************ 00:07:20.175 END TEST lvs_grow_dirty 00:07:20.175 ************************************ 00:07:20.175 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:20.175 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:20.175 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:20.175 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:20.175 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:20.175 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:20.175 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:20.175 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:20.175 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:20.175 nvmf_trace.0 00:07:20.175 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:20.175 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:20.175 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:20.175 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:20.175 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:20.175 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:20.175 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:20.175 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:20.175 rmmod nvme_tcp 00:07:20.175 rmmod nvme_fabrics 00:07:20.175 rmmod nvme_keyring 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:20.436 
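What follows is the tail of the standard nvmftestfini teardown: the pinned trace file has already been archived, the kernel NVMe initiator modules were unloaded above, and the target app plus its test network namespace go away next. In outline, with names taken from the log (netns cvl_0_0_ns_spdk, nvmfpid 2549197), $output_dir standing in for the spdk/../output directory, and treating _remove_spdk_ns as a netns delete being an assumption about the helper rather than a quote from it:

  tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0  # keep the trace for offline decode
  kill "$nvmfpid"                                       # nvmfpid=2549197, the nvmf_tgt started with -m 0x1
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the test's firewall rules
  ip netns delete cvl_0_0_ns_spdk                       # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1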
13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2549197 ']' 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2549197 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2549197 ']' 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2549197 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2549197 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2549197' 00:07:20.436 killing process with pid 2549197 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2549197 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2549197 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.436 13:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.982 13:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:22.982 00:07:22.982 real 0m44.812s 00:07:22.982 user 1m7.806s 00:07:22.982 sys 0m10.560s 00:07:22.982 13:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.982 13:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:22.982 ************************************ 00:07:22.982 END TEST nvmf_lvs_grow 00:07:22.982 ************************************ 00:07:22.983 13:57:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:22.983 13:57:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:22.983 13:57:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.983 13:57:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:22.983 ************************************ 00:07:22.983 START TEST nvmf_bdev_io_wait 00:07:22.983 ************************************ 00:07:22.983 13:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:22.983 * Looking for test storage... 00:07:22.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.983 13:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:22.983 13:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:07:22.983 13:57:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:22.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.983 --rc genhtml_branch_coverage=1 00:07:22.983 --rc genhtml_function_coverage=1 00:07:22.983 --rc genhtml_legend=1 00:07:22.983 --rc geninfo_all_blocks=1 00:07:22.983 --rc geninfo_unexecuted_blocks=1 00:07:22.983 00:07:22.983 ' 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:22.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.983 --rc genhtml_branch_coverage=1 00:07:22.983 --rc genhtml_function_coverage=1 00:07:22.983 --rc genhtml_legend=1 00:07:22.983 --rc geninfo_all_blocks=1 00:07:22.983 --rc geninfo_unexecuted_blocks=1 00:07:22.983 00:07:22.983 ' 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:22.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.983 --rc genhtml_branch_coverage=1 00:07:22.983 --rc genhtml_function_coverage=1 00:07:22.983 --rc genhtml_legend=1 00:07:22.983 --rc geninfo_all_blocks=1 00:07:22.983 --rc geninfo_unexecuted_blocks=1 00:07:22.983 00:07:22.983 ' 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:22.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.983 --rc genhtml_branch_coverage=1 00:07:22.983 --rc genhtml_function_coverage=1 00:07:22.983 --rc genhtml_legend=1 00:07:22.983 --rc geninfo_all_blocks=1 00:07:22.983 --rc geninfo_unexecuted_blocks=1 00:07:22.983 00:07:22.983 ' 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:22.983 13:57:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:22.983 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.984 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.984 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.984 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:22.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:22.984 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:22.984 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:22.984 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:22.984 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:22.984 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:22.984 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:22.984 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:22.984 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:22.984 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:22.984 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:22.984 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:22.984 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.984 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:22.984 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.984 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:22.984 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:22.984 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:22.984 13:57:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:31.130 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:31.131 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:31.131 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.131 13:57:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:31.131 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:31.131 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:31.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:31.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:07:31.131 00:07:31.131 --- 10.0.0.2 ping statistics --- 00:07:31.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.131 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:31.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:31.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:07:31.131 00:07:31.131 --- 10.0.0.1 ping statistics --- 00:07:31.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.131 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:31.131 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.132 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2554260 00:07:31.132 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2554260 00:07:31.132 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:31.132 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2554260 ']' 00:07:31.132 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.132 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.132 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.132 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.132 13:57:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.132 [2024-12-05 13:57:36.586914] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
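nvmf_tcp_init above moved one port of the e810 pair into a private network namespace and verified reachability in both directions before launching the target. A condensed sketch of that topology setup; interface names (cvl_0_0/cvl_0_1) match this run and would differ on other NICs:

# Condensed sketch of the namespace topology prepared above.
NS=cvl_0_0_ns_spdk; TGT=cvl_0_0; INI=cvl_0_1

ip netns add $NS                      # target runs in a private namespace
ip link set $TGT netns $NS            # move one NIC port into it
ip addr add 10.0.0.1/24 dev $INI      # initiator side stays in the root ns
ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT
ip link set $INI up
ip netns exec $NS ip link set $TGT up
ip netns exec $NS ip link set lo up

# Open the NVMe/TCP listener port and confirm both directions answer
# (the harness pings exactly once each way, as logged above).
iptables -I INPUT 1 -i $INI -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $NS ping -c 1 10.0.0.1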
00:07:31.132 [2024-12-05 13:57:36.586980] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.132 [2024-12-05 13:57:36.688927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.132 [2024-12-05 13:57:36.743300] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.132 [2024-12-05 13:57:36.743359] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.132 [2024-12-05 13:57:36.743367] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.132 [2024-12-05 13:57:36.743375] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.132 [2024-12-05 13:57:36.743381] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:31.132 [2024-12-05 13:57:36.745837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.132 [2024-12-05 13:57:36.745997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.132 [2024-12-05 13:57:36.746164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.132 [2024-12-05 13:57:36.746165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.132 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.132 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:31.132 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:31.132 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:31.132 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:07:31.395 [2024-12-05 13:57:37.541646] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.395 Malloc0 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:31.395 [2024-12-05 13:57:37.607382] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2554318 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2554321 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:31.395 { 00:07:31.395 "params": { 
00:07:31.395 "name": "Nvme$subsystem", 00:07:31.395 "trtype": "$TEST_TRANSPORT", 00:07:31.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:31.395 "adrfam": "ipv4", 00:07:31.395 "trsvcid": "$NVMF_PORT", 00:07:31.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:31.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:31.395 "hdgst": ${hdgst:-false}, 00:07:31.395 "ddgst": ${ddgst:-false} 00:07:31.395 }, 00:07:31.395 "method": "bdev_nvme_attach_controller" 00:07:31.395 } 00:07:31.395 EOF 00:07:31.395 )") 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2554323 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2554327 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:31.395 { 00:07:31.395 "params": { 00:07:31.395 "name": "Nvme$subsystem", 00:07:31.395 "trtype": "$TEST_TRANSPORT", 00:07:31.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:31.395 "adrfam": "ipv4", 00:07:31.395 "trsvcid": "$NVMF_PORT", 00:07:31.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:31.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:31.395 "hdgst": ${hdgst:-false}, 00:07:31.395 "ddgst": ${ddgst:-false} 00:07:31.395 }, 00:07:31.395 "method": "bdev_nvme_attach_controller" 00:07:31.395 } 00:07:31.395 EOF 00:07:31.395 )") 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:31.395 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:31.395 { 00:07:31.395 "params": { 00:07:31.395 "name": "Nvme$subsystem", 00:07:31.395 "trtype": "$TEST_TRANSPORT", 00:07:31.395 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:07:31.395 "adrfam": "ipv4", 00:07:31.395 "trsvcid": "$NVMF_PORT", 00:07:31.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:31.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:31.395 "hdgst": ${hdgst:-false}, 00:07:31.395 "ddgst": ${ddgst:-false} 00:07:31.395 }, 00:07:31.396 "method": "bdev_nvme_attach_controller" 00:07:31.396 } 00:07:31.396 EOF 00:07:31.396 )") 00:07:31.396 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:31.396 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:31.396 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:31.396 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:31.396 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:31.396 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:31.396 { 00:07:31.396 "params": { 00:07:31.396 "name": "Nvme$subsystem", 00:07:31.396 "trtype": "$TEST_TRANSPORT", 00:07:31.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:31.396 "adrfam": "ipv4", 00:07:31.396 "trsvcid": "$NVMF_PORT", 00:07:31.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:31.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:31.396 "hdgst": ${hdgst:-false}, 00:07:31.396 "ddgst": ${ddgst:-false} 00:07:31.396 }, 00:07:31.396 "method": "bdev_nvme_attach_controller" 00:07:31.396 } 00:07:31.396 EOF 00:07:31.396 )") 00:07:31.396 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:31.396 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2554318 00:07:31.396 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:31.396 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:31.396 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:31.396 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:31.396 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:31.396 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:31.396 "params": { 00:07:31.396 "name": "Nvme1", 00:07:31.396 "trtype": "tcp", 00:07:31.396 "traddr": "10.0.0.2", 00:07:31.396 "adrfam": "ipv4", 00:07:31.396 "trsvcid": "4420", 00:07:31.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:31.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:31.396 "hdgst": false, 00:07:31.396 "ddgst": false 00:07:31.396 }, 00:07:31.396 "method": "bdev_nvme_attach_controller" 00:07:31.396 }' 00:07:31.396 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:31.396 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:31.396 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:31.396 "params": { 00:07:31.396 "name": "Nvme1", 00:07:31.396 "trtype": "tcp", 00:07:31.396 "traddr": "10.0.0.2", 00:07:31.396 "adrfam": "ipv4", 00:07:31.396 "trsvcid": "4420", 00:07:31.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:31.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:31.396 "hdgst": false, 00:07:31.396 "ddgst": false 00:07:31.396 }, 00:07:31.396 "method": "bdev_nvme_attach_controller" 00:07:31.396 }' 00:07:31.396 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:31.396 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:31.396 "params": { 00:07:31.396 "name": "Nvme1", 00:07:31.396 "trtype": "tcp", 00:07:31.396 "traddr": "10.0.0.2", 00:07:31.396 "adrfam": "ipv4", 00:07:31.396 "trsvcid": "4420", 00:07:31.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:31.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:31.396 "hdgst": false, 00:07:31.396 "ddgst": false 00:07:31.396 }, 00:07:31.396 "method": "bdev_nvme_attach_controller" 00:07:31.396 }' 00:07:31.396 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:31.396 13:57:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:31.396 "params": { 00:07:31.396 "name": "Nvme1", 00:07:31.396 "trtype": "tcp", 00:07:31.396 "traddr": "10.0.0.2", 00:07:31.396 "adrfam": "ipv4", 00:07:31.396 "trsvcid": "4420", 00:07:31.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:31.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:31.396 "hdgst": false, 00:07:31.396 "ddgst": false 00:07:31.396 }, 00:07:31.396 "method": "bdev_nvme_attach_controller" 00:07:31.396 }' 00:07:31.396 [2024-12-05 13:57:37.666884] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:07:31.396 [2024-12-05 13:57:37.666952] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:31.396 [2024-12-05 13:57:37.668247] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:07:31.396 [2024-12-05 13:57:37.668318] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:31.396 [2024-12-05 13:57:37.672084] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:07:31.396 [2024-12-05 13:57:37.672146] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:31.396 [2024-12-05 13:57:37.679984] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
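Each bdevperf instance above receives its attach configuration as inline JSON over a process-substitution file descriptor (--json /dev/fd/63). A sketch of one such launch, using the write-workload flags and params printed in the trace; the outer "subsystems" wrapper is reconstructed from the harness's gen_nvmf_target_json helper and may differ in detail:

# One bdevperf launch, reconstructed as a sketch (wrapper layout assumed).
./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
  --json <(cat <<'JSON'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
JSON
)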
00:07:31.396 [2024-12-05 13:57:37.680043] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:31.658 [2024-12-05 13:57:37.871573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.658 [2024-12-05 13:57:37.911731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:31.658 [2024-12-05 13:57:37.941824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.920 [2024-12-05 13:57:37.981924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:31.920 [2024-12-05 13:57:38.033369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.920 [2024-12-05 13:57:38.076397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:31.920 [2024-12-05 13:57:38.103840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.920 [2024-12-05 13:57:38.141938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:32.181 Running I/O for 1 seconds... 00:07:32.181 Running I/O for 1 seconds... 00:07:32.181 Running I/O for 1 seconds... 00:07:32.181 Running I/O for 1 seconds... 00:07:33.125 10793.00 IOPS, 42.16 MiB/s 00:07:33.125 Latency(us) 00:07:33.125 [2024-12-05T12:57:39.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.125 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:33.125 Nvme1n1 : 1.01 10838.10 42.34 0.00 0.00 11762.70 6389.76 18240.85 00:07:33.125 [2024-12-05T12:57:39.425Z] =================================================================================================================== 00:07:33.125 [2024-12-05T12:57:39.425Z] Total : 10838.10 42.34 0.00 0.00 11762.70 6389.76 18240.85 00:07:33.125 180840.00 IOPS, 706.41 MiB/s 00:07:33.125 Latency(us) 00:07:33.125 [2024-12-05T12:57:39.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.125 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:33.125 Nvme1n1 : 1.00 180483.96 705.02 0.00 0.00 705.19 298.67 1966.08 00:07:33.125 [2024-12-05T12:57:39.425Z] =================================================================================================================== 00:07:33.125 [2024-12-05T12:57:39.425Z] Total : 180483.96 705.02 0.00 0.00 705.19 298.67 1966.08 00:07:33.125 10639.00 IOPS, 41.56 MiB/s 00:07:33.125 Latency(us) 00:07:33.125 [2024-12-05T12:57:39.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.125 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:33.125 Nvme1n1 : 1.01 10712.12 41.84 0.00 0.00 11907.99 5106.35 21517.65 00:07:33.125 [2024-12-05T12:57:39.425Z] =================================================================================================================== 00:07:33.125 [2024-12-05T12:57:39.425Z] Total : 10712.12 41.84 0.00 0.00 11907.99 5106.35 21517.65 00:07:33.125 9202.00 IOPS, 35.95 MiB/s 00:07:33.125 Latency(us) 00:07:33.125 [2024-12-05T12:57:39.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.125 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:33.125 Nvme1n1 : 1.01 9278.12 36.24 0.00 0.00 13746.40 4997.12 26432.85 00:07:33.125 [2024-12-05T12:57:39.425Z] 
=================================================================================================================== 00:07:33.125 [2024-12-05T12:57:39.425Z] Total : 9278.12 36.24 0.00 0.00 13746.40 4997.12 26432.85 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2554321 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2554323 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2554327 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:33.386 rmmod nvme_tcp 00:07:33.386 rmmod nvme_fabrics 00:07:33.386 rmmod nvme_keyring 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2554260 ']' 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2554260 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2554260 ']' 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2554260 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2554260 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 2554260' 00:07:33.386 killing process with pid 2554260 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2554260 00:07:33.386 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2554260 00:07:33.646 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:33.646 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:33.646 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:33.646 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:33.646 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:33.646 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:33.646 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:33.646 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:33.646 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:33.646 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.646 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.646 13:57:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.194 13:57:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:36.194 00:07:36.194 real 0m13.055s 00:07:36.194 user 0m19.687s 00:07:36.194 sys 0m7.460s 00:07:36.194 13:57:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.194 13:57:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:36.194 ************************************ 00:07:36.194 END TEST nvmf_bdev_io_wait 00:07:36.194 ************************************ 00:07:36.194 13:57:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:36.194 13:57:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:36.194 13:57:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.194 13:57:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:36.194 ************************************ 00:07:36.194 START TEST nvmf_queue_depth 00:07:36.194 ************************************ 00:07:36.194 13:57:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:36.194 * Looking for test storage... 
00:07:36.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:36.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.194 --rc genhtml_branch_coverage=1 00:07:36.194 --rc genhtml_function_coverage=1 00:07:36.194 --rc genhtml_legend=1 00:07:36.194 --rc geninfo_all_blocks=1 00:07:36.194 --rc geninfo_unexecuted_blocks=1 00:07:36.194 00:07:36.194 ' 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:36.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.194 --rc genhtml_branch_coverage=1 00:07:36.194 --rc genhtml_function_coverage=1 00:07:36.194 --rc genhtml_legend=1 00:07:36.194 --rc geninfo_all_blocks=1 00:07:36.194 --rc geninfo_unexecuted_blocks=1 00:07:36.194 00:07:36.194 ' 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:36.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.194 --rc genhtml_branch_coverage=1 00:07:36.194 --rc genhtml_function_coverage=1 00:07:36.194 --rc genhtml_legend=1 00:07:36.194 --rc geninfo_all_blocks=1 00:07:36.194 --rc geninfo_unexecuted_blocks=1 00:07:36.194 00:07:36.194 ' 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:36.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.194 --rc genhtml_branch_coverage=1 00:07:36.194 --rc genhtml_function_coverage=1 00:07:36.194 --rc genhtml_legend=1 00:07:36.194 --rc geninfo_all_blocks=1 00:07:36.194 --rc geninfo_unexecuted_blocks=1 00:07:36.194 00:07:36.194 ' 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
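[NOTE] The lt 1.15 2 gate above compares dotted version strings field by field. Condensed, the logic is as follows (assumes purely numeric fields, which the real scripts/common.sh validates via its decimal helper; use_lcov1_flags is a stand-in for exporting the lcov 1.x coverage flags):

lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < n; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1    # equal versions are not less-than
}
lt "$(lcov --version | awk '{print $NF}')" 2 && use_lcov1_flags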
-- nvmf/common.sh@7 -- # uname -s 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:36.194 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
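[NOTE] The host identity above comes straight from nvme-cli: gen-hostnqn emits a uuid-based NQN, and the host ID is its trailing UUID. In shorthand (the parameter expansion is our condensation of what common.sh does):

NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep everything after the last ':' -> the bare UUID
# later consumed in pairs, e.g.: nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" ...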
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:36.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
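[NOTE] The "[: : integer expression expected" line above is bash complaining that an empty string was handed to -eq at nvmf/common.sh line 33; the run continues regardless. The pattern and its quiet form, with VAR as a stand-in since the actual variable name is not visible in this log:

[ "$VAR" -eq 1 ] && echo enabled        # errors when VAR is empty or unset
[ "${VAR:-0}" -eq 1 ] && echo enabled   # defaulting the expansion keeps the test well-formed (and false)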
MALLOC_BLOCK_SIZE=512 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:36.195 13:57:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.337 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:44.337 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:44.337 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:44.337 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:44.337 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:44.337 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:44.337 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:44.337 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:44.337 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:44.337 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:44.337 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:44.338 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:44.338 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:44.338 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:44.338 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
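[NOTE] The discovery pass above amounts to: match PCI functions against known vendor:device pairs, then read the bound interface names from sysfs. A standalone sketch for the E810 IDs seen in this run (0x8086:0x159b), using the same /sys/bus/pci/devices/$pci/net/ glob common.sh uses:

for pci in /sys/bus/pci/devices/*; do
  [ "$(cat "$pci/vendor")" = 0x8086 ] || continue   # Intel
  [ "$(cat "$pci/device")" = 0x159b ] || continue   # the E810 variant found above
  for net in "$pci"/net/*; do
    [ -e "$net" ] && echo "Found net devices under ${pci##*/}: ${net##*/}"
  done
done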
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:44.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:44.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:07:44.338 00:07:44.338 --- 10.0.0.2 ping statistics --- 00:07:44.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.338 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:44.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:44.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:07:44.338 00:07:44.338 --- 10.0.0.1 ping statistics --- 00:07:44.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.338 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:44.338 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:44.339 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:44.339 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.339 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2559012 00:07:44.339 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2559012 00:07:44.339 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:44.339 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2559012 ']' 00:07:44.339 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.339 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.339 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.339 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.339 13:57:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.339 [2024-12-05 13:57:49.765358] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
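[NOTE] The nvmf_tcp_init sequence above builds a two-endpoint topology on one host: the target port (cvl_0_0) is moved into a private network namespace so the initiator port (cvl_0_1) has to cross the wire. Collected into one place, with names and addresses exactly as in this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # target NIC disappears from the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator

The SPDK_NVMF comment on the iptables rule is the hook the teardown uses later to strip exactly these rules.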
00:07:44.339 [2024-12-05 13:57:49.765438] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.339 [2024-12-05 13:57:49.870990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.339 [2024-12-05 13:57:49.922770] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.339 [2024-12-05 13:57:49.922823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.339 [2024-12-05 13:57:49.922838] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.339 [2024-12-05 13:57:49.922845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.339 [2024-12-05 13:57:49.922851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:44.339 [2024-12-05 13:57:49.923656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.339 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.339 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:44.339 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:44.339 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:44.339 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.339 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.339 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:44.339 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.339 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.339 [2024-12-05 13:57:50.628937] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.601 Malloc0 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.601 13:57:50 
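[NOTE] nvmfappstart above runs the target inside the namespace and blocks until its RPC socket answers. The launch line is verbatim from the log (with $SPDK_ROOT standing in for the long workspace path); the polling loop is only a sketch of what waitforlisten does, its real body is in common/autotest_common.sh and may differ:

ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
  kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
  sleep 0.5
done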
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.601 [2024-12-05 13:57:50.690249] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2559360 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2559360 /var/tmp/bdevperf.sock 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2559360 ']' 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:44.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.601 13:57:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:44.601 [2024-12-05 13:57:50.748288] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
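[NOTE] With the target up, the subsystem is provisioned through five RPCs, all visible above. Stripped of the harness wrappers (rpc.py path abbreviated; rpc_cmd talks to the default /var/tmp/spdk.sock):

rpc="$SPDK_ROOT/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport, flags exactly as issued above
$rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB ramdisk with 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose the ramdisk as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420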
00:07:44.601 [2024-12-05 13:57:50.748350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2559360 ] 00:07:44.601 [2024-12-05 13:57:50.838245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.601 [2024-12-05 13:57:50.891286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.542 13:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.542 13:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:45.542 13:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:45.542 13:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.542 13:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:45.542 NVMe0n1 00:07:45.542 13:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.542 13:57:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:45.802 Running I/O for 10 seconds... 00:07:47.685 9195.00 IOPS, 35.92 MiB/s [2024-12-05T12:57:54.926Z] 10245.50 IOPS, 40.02 MiB/s [2024-12-05T12:57:55.891Z] 10916.33 IOPS, 42.64 MiB/s [2024-12-05T12:57:56.921Z] 11195.75 IOPS, 43.73 MiB/s [2024-12-05T12:57:58.301Z] 11668.00 IOPS, 45.58 MiB/s [2024-12-05T12:57:59.239Z] 11940.67 IOPS, 46.64 MiB/s [2024-12-05T12:58:00.177Z] 12136.71 IOPS, 47.41 MiB/s [2024-12-05T12:58:01.113Z] 12265.25 IOPS, 47.91 MiB/s [2024-12-05T12:58:02.049Z] 12365.11 IOPS, 48.30 MiB/s [2024-12-05T12:58:02.049Z] 12463.10 IOPS, 48.68 MiB/s 00:07:55.749 Latency(us) 00:07:55.749 [2024-12-05T12:58:02.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.749 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:55.749 Verification LBA range: start 0x0 length 0x4000 00:07:55.749 NVMe0n1 : 10.07 12480.42 48.75 0.00 0.00 81723.39 25340.59 76458.67 00:07:55.749 [2024-12-05T12:58:02.049Z] =================================================================================================================== 00:07:55.749 [2024-12-05T12:58:02.049Z] Total : 12480.42 48.75 0.00 0.00 81723.39 25340.59 76458.67 00:07:55.749 { 00:07:55.749 "results": [ 00:07:55.749 { 00:07:55.749 "job": "NVMe0n1", 00:07:55.749 "core_mask": "0x1", 00:07:55.749 "workload": "verify", 00:07:55.749 "status": "finished", 00:07:55.749 "verify_range": { 00:07:55.749 "start": 0, 00:07:55.749 "length": 16384 00:07:55.749 }, 00:07:55.749 "queue_depth": 1024, 00:07:55.749 "io_size": 4096, 00:07:55.749 "runtime": 10.065603, 00:07:55.749 "iops": 12480.424670037155, 00:07:55.749 "mibps": 48.75165886733264, 00:07:55.749 "io_failed": 0, 00:07:55.749 "io_timeout": 0, 00:07:55.749 "avg_latency_us": 81723.39191305202, 00:07:55.749 "min_latency_us": 25340.586666666666, 00:07:55.749 "max_latency_us": 76458.66666666667 00:07:55.749 } 00:07:55.749 ], 00:07:55.749 "core_count": 1 00:07:55.749 } 00:07:55.749 13:58:01 
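[NOTE] The initiator side above is bdevperf run as a daemon on its own RPC socket (-z waits for RPC before starting I/O), given a controller to attach, then told to start; it connects from the root namespace to the listener inside the target namespace. Condensed, with paths abbreviated:

"$SPDK_ROOT/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
"$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bdevperf.sock \
  bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

The summary below is internally consistent: 12480.42 IOPS at a 4096-byte I/O size is 12480.42 / 256 ≈ 48.75 MiB/s, matching the reported mibps.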
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2559360 00:07:55.749 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2559360 ']' 00:07:55.749 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2559360 00:07:55.749 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:55.749 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.749 13:58:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2559360 00:07:55.749 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.749 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:55.749 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2559360' 00:07:55.749 killing process with pid 2559360 00:07:55.749 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2559360 00:07:55.749 Received shutdown signal, test time was about 10.000000 seconds 00:07:55.749 00:07:55.749 Latency(us) 00:07:55.749 [2024-12-05T12:58:02.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.749 [2024-12-05T12:58:02.049Z] =================================================================================================================== 00:07:55.749 [2024-12-05T12:58:02.049Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:55.749 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2559360 00:07:56.008 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:56.008 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:07:56.008 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:56.008 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:56.008 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:56.008 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:56.008 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:56.008 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:56.008 rmmod nvme_tcp 00:07:56.008 rmmod nvme_fabrics 00:07:56.008 rmmod nvme_keyring 00:07:56.008 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:56.008 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:56.008 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:56.008 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2559012 ']' 00:07:56.008 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2559012 00:07:56.008 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2559012 ']' 00:07:56.008 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 2559012 00:07:56.008 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:56.008 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.008 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2559012 00:07:56.008 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:56.008 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:56.008 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2559012' 00:07:56.008 killing process with pid 2559012 00:07:56.008 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2559012 00:07:56.008 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2559012 00:07:56.268 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:56.268 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:56.268 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:56.268 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:56.268 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:56.268 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:56.268 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:56.268 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:56.268 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:56.268 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.268 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.268 13:58:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.176 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:58.176 00:07:58.176 real 0m22.500s 00:07:58.176 user 0m25.830s 00:07:58.176 sys 0m7.035s 00:07:58.176 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.176 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:58.176 ************************************ 00:07:58.176 END TEST nvmf_queue_depth 00:07:58.176 ************************************ 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core -- 
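[NOTE] Teardown above mirrors setup: stop bdevperf, stop the target, unload the initiator modules, then scrub the firewall rules and the namespace. The iptr rule scrub is visible verbatim; the namespace removal happens inside _remove_spdk_ns, whose body the log does not show (the ip netns delete line below is our assumption):

iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules tagged at setup
ip netns delete cvl_0_0_ns_spdk                        # assumption: how _remove_spdk_ns cleans up
ip -4 addr flush cvl_0_1                               # clear the initiator address, as logged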
common/autotest_common.sh@10 -- # set +x 00:07:58.435 ************************************ 00:07:58.435 START TEST nvmf_target_multipath 00:07:58.435 ************************************ 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:58.435 * Looking for test storage... 00:07:58.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.435 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:58.695 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.695 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:58.695 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:58.695 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.695 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:58.695 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.695 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.695 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.695 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:58.695 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.695 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:58.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.695 --rc genhtml_branch_coverage=1 00:07:58.695 --rc genhtml_function_coverage=1 00:07:58.695 --rc genhtml_legend=1 00:07:58.695 --rc geninfo_all_blocks=1 00:07:58.695 --rc geninfo_unexecuted_blocks=1 00:07:58.695 00:07:58.695 ' 00:07:58.695 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:58.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.695 --rc genhtml_branch_coverage=1 00:07:58.695 --rc genhtml_function_coverage=1 00:07:58.695 --rc genhtml_legend=1 00:07:58.695 --rc geninfo_all_blocks=1 00:07:58.695 --rc geninfo_unexecuted_blocks=1 00:07:58.695 00:07:58.695 ' 00:07:58.695 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:58.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.695 --rc genhtml_branch_coverage=1 00:07:58.696 --rc genhtml_function_coverage=1 00:07:58.696 --rc genhtml_legend=1 00:07:58.696 --rc geninfo_all_blocks=1 00:07:58.696 --rc geninfo_unexecuted_blocks=1 00:07:58.696 00:07:58.696 ' 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:58.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.696 --rc genhtml_branch_coverage=1 00:07:58.696 --rc genhtml_function_coverage=1 00:07:58.696 --rc genhtml_legend=1 00:07:58.696 --rc geninfo_all_blocks=1 00:07:58.696 --rc geninfo_unexecuted_blocks=1 00:07:58.696 00:07:58.696 ' 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:58.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:07:58.696 13:58:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:06.848 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:06.848 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.848 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:06.849 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.849 13:58:11 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:06.849 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:06.849 13:58:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:06.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:06.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:08:06.849 00:08:06.849 --- 10.0.0.2 ping statistics --- 00:08:06.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.849 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:06.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:06.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:08:06.849 00:08:06.849 --- 10.0.0.1 ping statistics --- 00:08:06.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.849 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:06.849 only one NIC for nvmf test 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
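The trace above is nvmf_tcp_init from test/nvmf/common.sh: on a phy (physical NIC) run it moves the target-side port into a private network namespace so one host can push real NVMe/TCP traffic between two back-to-back E810 ports. A minimal standalone sketch of the same topology, assuming the interface names from this run (cvl_0_0/cvl_0_1; they differ per machine):

```bash
#!/usr/bin/env bash
# Rebuild the target-in-namespace topology from the trace above.
# Assumes two back-to-back ports named cvl_0_0 (target side) and
# cvl_0_1 (initiator side); the names vary per machine.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0 TGT_IP=10.0.0.2
INI_IF=cvl_0_1 INI_IP=10.0.0.1

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"            # target port leaves the root namespace

ip addr add "$INI_IP/24" dev "$INI_IF"       # initiator stays on the host side
ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port, tagging the rule so teardown can find it.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF:test

ping -c 1 "$TGT_IP"                          # host -> namespace
ip netns exec "$NS" ping -c 1 "$INI_IP"      # namespace -> host
```

The -m comment tag on the iptables rule is what the teardown path removes later with iptables-save | grep -v SPDK_NVMF | iptables-restore, as seen in the fini trace that follows.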
00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:06.849 rmmod nvme_tcp 00:08:06.849 rmmod nvme_fabrics 00:08:06.849 rmmod nvme_keyring 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.849 13:58:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.235 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:08.235 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:08.235 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:08.235 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:08.235 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:08.235 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:08.235 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:08.235 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:08.235 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:08.235 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:08.235 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:08.235 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:08.235 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:08.235 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:08.235 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:08.235 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:08.236 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:08.236 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:08.236 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:08.236 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:08.236 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:08.236 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:08.236 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.236 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.236 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.236 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:08.236 00:08:08.236 real 0m9.920s 00:08:08.236 user 0m2.171s 00:08:08.236 sys 0m5.698s 00:08:08.236 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.236 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:08.236 ************************************ 00:08:08.236 END TEST nvmf_target_multipath 00:08:08.236 ************************************ 00:08:08.236 13:58:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:08.236 13:58:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:08.236 13:58:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.236 13:58:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:08.498 ************************************ 00:08:08.498 START TEST nvmf_zcopy 00:08:08.498 ************************************ 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:08.498 * Looking for test storage... 
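One detail worth flagging in the traces: '[' '' -eq 1 ']' at test/nvmf/common.sh line 33 prints "[: : integer expression expected" (it shows up again below when zcopy.sh re-sources the file). Bash's test builtin requires integer operands for -eq, and an unset SPDK_* flag expands to an empty string; the check still behaves as false, so the tests proceed, but the message is noise. A small illustration with a hypothetical flag variable:

```bash
#!/usr/bin/env bash
flag=""                          # unset/empty, as in the trace (hypothetical name)

# What common.sh line 33 effectively runs -- prints the error, evaluates false:
if [ "$flag" -eq 1 ]; then       # -> "[: : integer expression expected"
    echo "enabled"
fi

# Quieter equivalents: default the operand, or use arithmetic evaluation,
# where empty/unset variables evaluate to 0.
if [ "${flag:-0}" -eq 1 ]; then echo "enabled"; fi
if ((flag == 1)); then echo "enabled"; fi
```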
00:08:08.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:08.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.498 --rc genhtml_branch_coverage=1 00:08:08.498 --rc genhtml_function_coverage=1 00:08:08.498 --rc genhtml_legend=1 00:08:08.498 --rc geninfo_all_blocks=1 00:08:08.498 --rc geninfo_unexecuted_blocks=1 00:08:08.498 00:08:08.498 ' 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:08.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.498 --rc genhtml_branch_coverage=1 00:08:08.498 --rc genhtml_function_coverage=1 00:08:08.498 --rc genhtml_legend=1 00:08:08.498 --rc geninfo_all_blocks=1 00:08:08.498 --rc geninfo_unexecuted_blocks=1 00:08:08.498 00:08:08.498 ' 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:08.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.498 --rc genhtml_branch_coverage=1 00:08:08.498 --rc genhtml_function_coverage=1 00:08:08.498 --rc genhtml_legend=1 00:08:08.498 --rc geninfo_all_blocks=1 00:08:08.498 --rc geninfo_unexecuted_blocks=1 00:08:08.498 00:08:08.498 ' 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:08.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.498 --rc genhtml_branch_coverage=1 00:08:08.498 --rc genhtml_function_coverage=1 00:08:08.498 --rc genhtml_legend=1 00:08:08.498 --rc geninfo_all_blocks=1 00:08:08.498 --rc geninfo_unexecuted_blocks=1 00:08:08.498 00:08:08.498 ' 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:08.498 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:08.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.499 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.760 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:08.760 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:08.760 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:08.760 13:58:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.907 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:16.908 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:16.908 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:16.908 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:16.908 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:16.908 13:58:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:16.908 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:16.908 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:16.908 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:16.908 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:16.908 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:16.908 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:16.908 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:16.908 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:16.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:16.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:08:16.908 00:08:16.908 --- 10.0.0.2 ping statistics --- 00:08:16.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.908 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:08:16.908 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:16.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:16.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:08:16.908 00:08:16.908 --- 10.0.0.1 ping statistics --- 00:08:16.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.909 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:08:16.909 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.909 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:16.909 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:16.909 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.909 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:16.909 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:16.909 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.909 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:16.909 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:16.909 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:16.909 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:16.909 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:16.909 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.909 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2570065 00:08:16.909 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2570065 00:08:16.909 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:16.909 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2570065 ']' 00:08:16.909 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.909 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.909 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.909 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.909 13:58:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.909 [2024-12-05 13:58:22.325246] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
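With the namespace topology verified by the pings above, nvmfappstart -m 0x2 launches the target inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2; core mask 0x2 is one reactor on core 1, as the reactor notice confirms) and waitforlisten blocks until the RPC socket responds. A sketch of that start-and-wait pattern; the polling loop is a simplification of SPDK's waitforlisten helper, not its exact implementation:

```bash
#!/usr/bin/env bash
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk
SOCK=/var/tmp/spdk.sock

# Core mask 0x2 pins the single reactor to core 1, matching the trace.
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Simplified waitforlisten: poll until the UNIX-domain RPC socket answers.
for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -s "$SOCK" -t 1 rpc_get_methods &>/dev/null && break
    kill -0 "$nvmfpid"           # bail out if the target died during startup
    sleep 0.1
done
```

The RPC socket is a filesystem UNIX-domain socket, so it stays reachable from the root namespace even though the target's network stack is confined to cvl_0_0_ns_spdk.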
00:08:16.909 [2024-12-05 13:58:22.325311] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.909 [2024-12-05 13:58:22.422758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.909 [2024-12-05 13:58:22.472889] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.909 [2024-12-05 13:58:22.472936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.909 [2024-12-05 13:58:22.472944] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.909 [2024-12-05 13:58:22.472952] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.909 [2024-12-05 13:58:22.472958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:16.909 [2024-12-05 13:58:22.473700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.909 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.909 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:16.909 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:16.909 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:16.909 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.909 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.909 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:16.909 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:16.909 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.909 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.909 [2024-12-05 13:58:23.180653] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.909 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.909 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:16.909 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.909 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.909 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.909 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:16.909 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.909 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:17.170 [2024-12-05 13:58:23.204900] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:17.170 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.170 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:17.170 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.170 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:17.170 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.170 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:17.170 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.170 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:17.170 malloc0 00:08:17.170 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.170 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:17.170 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.170 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:17.170 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.170 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:17.170 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:17.170 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:17.170 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:17.170 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:17.170 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:17.170 { 00:08:17.170 "params": { 00:08:17.170 "name": "Nvme$subsystem", 00:08:17.170 "trtype": "$TEST_TRANSPORT", 00:08:17.170 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:17.170 "adrfam": "ipv4", 00:08:17.170 "trsvcid": "$NVMF_PORT", 00:08:17.170 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:17.170 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:17.170 "hdgst": ${hdgst:-false}, 00:08:17.170 "ddgst": ${ddgst:-false} 00:08:17.170 }, 00:08:17.170 "method": "bdev_nvme_attach_controller" 00:08:17.170 } 00:08:17.170 EOF 00:08:17.170 )") 00:08:17.170 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:17.170 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
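Once the target is listening, zcopy.sh provisions it over RPC exactly as traced above: a TCP transport with zero-copy enabled, a subsystem capped at 10 namespaces, data plus discovery listeners on 10.0.0.2:4420, and a 32 MiB malloc bdev with 4096-byte blocks attached as namespace 1. The same sequence as plain rpc.py calls:

```bash
#!/usr/bin/env bash
set -euo pipefail
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Transport flags exactly as traced; --zcopy enables the zero-copy
# path this test exists to exercise.
"$rpc" nvmf_create_transport -t tcp -o -c 0 --zcopy
"$rpc" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
"$rpc" bdev_malloc_create 32 4096 -b malloc0    # 32 MiB bdev, 4096-byte blocks
"$rpc" nvmf_subsystem_add_ns "$NQN" malloc0 -n 1
```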
00:08:17.170 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:08:17.170 13:58:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:17.170 "params": {
00:08:17.170 "name": "Nvme1",
00:08:17.170 "trtype": "tcp",
00:08:17.170 "traddr": "10.0.0.2",
00:08:17.170 "adrfam": "ipv4",
00:08:17.170 "trsvcid": "4420",
00:08:17.170 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:08:17.170 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:08:17.170 "hdgst": false,
00:08:17.170 "ddgst": false
00:08:17.170 },
00:08:17.170 "method": "bdev_nvme_attach_controller"
00:08:17.170 }'
00:08:17.170 [2024-12-05 13:58:23.305760] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization...
00:08:17.170 [2024-12-05 13:58:23.305822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2570191 ]
00:08:17.170 [2024-12-05 13:58:23.396339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:17.170 [2024-12-05 13:58:23.450083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:17.432 Running I/O for 10 seconds...
00:08:19.772 6503.00 IOPS, 50.80 MiB/s [2024-12-05T12:58:26.641Z]
7409.50 IOPS, 57.89 MiB/s [2024-12-05T12:58:28.022Z]
8168.00 IOPS, 63.81 MiB/s [2024-12-05T12:58:28.963Z]
8547.50 IOPS, 66.78 MiB/s [2024-12-05T12:58:29.904Z]
8771.60 IOPS, 68.53 MiB/s [2024-12-05T12:58:30.846Z]
8936.67 IOPS, 69.82 MiB/s [2024-12-05T12:58:31.788Z]
9060.86 IOPS, 70.79 MiB/s [2024-12-05T12:58:32.732Z]
9152.62 IOPS, 71.50 MiB/s [2024-12-05T12:58:33.684Z]
9227.44 IOPS, 72.09 MiB/s [2024-12-05T12:58:33.684Z]
9287.90 IOPS, 72.56 MiB/s
00:08:27.384 Latency(us)
00:08:27.384 [2024-12-05T12:58:33.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:27.384 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:08:27.385 Verification LBA range: start 0x0 length 0x1000
00:08:27.385 Nvme1n1 : 10.01 9286.28 72.55 0.00 0.00 13737.11 1508.69 28398.93
00:08:27.385 [2024-12-05T12:58:33.685Z] ===================================================================================================================
00:08:27.385 [2024-12-05T12:58:33.685Z] Total : 9286.28 72.55 0.00 0.00 13737.11 1508.69 28398.93
00:08:27.647 13:58:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2572293
00:08:27.647 13:58:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:08:27.647 13:58:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:27.647 13:58:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:08:27.647 13:58:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:08:27.647 13:58:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:08:27.647 13:58:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:08:27.647 13:58:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:08:27.647 13:58:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:08:27.647 {
00:08:27.647 "params": {
00:08:27.647 "name": "Nvme$subsystem",
00:08:27.647 "trtype": "$TEST_TRANSPORT",
00:08:27.647 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:27.647 "adrfam": "ipv4",
00:08:27.647 "trsvcid": "$NVMF_PORT",
00:08:27.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:27.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:27.647 "hdgst": ${hdgst:-false},
00:08:27.647 "ddgst": ${ddgst:-false}
00:08:27.647 },
00:08:27.647 "method": "bdev_nvme_attach_controller"
00:08:27.647 }
00:08:27.647 EOF
00:08:27.647 )")
00:08:27.647 13:58:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:08:27.647 [2024-12-05 13:58:33.766101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.647 [2024-12-05 13:58:33.766130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.647 13:58:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:08:27.648 13:58:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:08:27.648 13:58:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:27.648 "params": {
00:08:27.648 "name": "Nvme1",
00:08:27.648 "trtype": "tcp",
00:08:27.648 "traddr": "10.0.0.2",
00:08:27.648 "adrfam": "ipv4",
00:08:27.648 "trsvcid": "4420",
00:08:27.648 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:08:27.648 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:08:27.648 "hdgst": false,
00:08:27.648 "ddgst": false
00:08:27.648 },
00:08:27.648 "method": "bdev_nvme_attach_controller"
00:08:27.648 }'
00:08:27.648 [2024-12-05 13:58:33.778094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.648 [2024-12-05 13:58:33.778103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.648 [2024-12-05 13:58:33.790121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.648 [2024-12-05 13:58:33.790128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.648 [2024-12-05 13:58:33.802151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.648 [2024-12-05 13:58:33.802158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.648 [2024-12-05 13:58:33.814182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.648 [2024-12-05 13:58:33.814190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:27.648 [2024-12-05 13:58:33.818958] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization...
00:08:27.648 [2024-12-05 13:58:33.819005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2572293 ]
00:08:27.648 [2024-12-05 13:58:33.826211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.648 [2024-12-05 13:58:33.826219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the add_ns_ext/ns_paused error pair above repeats every ~12 ms throughout bdevperf start-up; only the timestamps change ...]
00:08:27.648 [2024-12-05 13:58:33.900384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:27.648 [2024-12-05 13:58:33.929939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:27.910 Running I/O for 5 seconds...
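Everything from here to the end of the run is the expected failure loop: zcopy.sh keeps the 5-second 50/50 randrw bdevperf workload going while repeatedly issuing an add-namespace RPC that must fail because NSID 1 is already taken. A sketch of that shape, reusing $rpc and gen_config from the sketches above; the loop body and its termination condition are illustrative, not lifted from zcopy.sh:

# Start the 5-second randrw workload from the trace in the background.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_config) -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!

# Hammer the target while I/O is in flight; each call fails with
# "Requested NSID 1 already in use" followed by "Unable to add namespace".
while kill -0 "$perfpid" 2>/dev/null; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done

wait "$perfpid"   # collect bdevperf's exit status once the run ends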
00:08:27.910 [2024-12-05 13:58:34.130574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:27.910 [2024-12-05 13:58:34.130590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair repeats every ~13 ms for the rest of the 5-second randrw run; only the timestamps change ...]
00:08:28.959 19232.00 IOPS, 150.25 MiB/s [2024-12-05T12:58:35.259Z]
[... error pairs continue ...]
00:08:30.004 19301.50 IOPS, 150.79 MiB/s [2024-12-05T12:58:36.304Z]
[... error pairs continue through 13:58:36.971, where this section of the log ends ...]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.788 [2024-12-05 13:58:36.985342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.788 [2024-12-05 13:58:36.985357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.788 [2024-12-05 13:58:36.998226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.788 [2024-12-05 13:58:36.998241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.788 [2024-12-05 13:58:37.011247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.788 [2024-12-05 13:58:37.011262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.788 [2024-12-05 13:58:37.024744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.788 [2024-12-05 13:58:37.024759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.788 [2024-12-05 13:58:37.038291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.788 [2024-12-05 13:58:37.038306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.788 [2024-12-05 13:58:37.051263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.788 [2024-12-05 13:58:37.051279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.788 [2024-12-05 13:58:37.063723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.788 [2024-12-05 13:58:37.063738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.788 [2024-12-05 13:58:37.076345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.788 [2024-12-05 13:58:37.076360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.049 [2024-12-05 13:58:37.089394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.049 [2024-12-05 13:58:37.089409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.049 [2024-12-05 13:58:37.101768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.049 [2024-12-05 13:58:37.101782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.049 [2024-12-05 13:58:37.114264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.049 [2024-12-05 13:58:37.114278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.049 19365.33 IOPS, 151.29 MiB/s [2024-12-05T12:58:37.349Z] [2024-12-05 13:58:37.127646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.049 [2024-12-05 13:58:37.127660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.049 [2024-12-05 13:58:37.140590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.049 [2024-12-05 13:58:37.140605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.049 [2024-12-05 13:58:37.153300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.049 [2024-12-05 13:58:37.153315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.049 [2024-12-05 
13:58:37.166416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.049 [2024-12-05 13:58:37.166431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.049 [2024-12-05 13:58:37.179357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.049 [2024-12-05 13:58:37.179372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.049 [2024-12-05 13:58:37.192501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.049 [2024-12-05 13:58:37.192515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.049 [2024-12-05 13:58:37.205580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.049 [2024-12-05 13:58:37.205595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.049 [2024-12-05 13:58:37.218827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.049 [2024-12-05 13:58:37.218842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.049 [2024-12-05 13:58:37.231844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.049 [2024-12-05 13:58:37.231860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.049 [2024-12-05 13:58:37.244854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.049 [2024-12-05 13:58:37.244869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.049 [2024-12-05 13:58:37.258104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.049 [2024-12-05 13:58:37.258119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.049 [2024-12-05 13:58:37.271497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.049 [2024-12-05 13:58:37.271513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.049 [2024-12-05 13:58:37.285308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.049 [2024-12-05 13:58:37.285323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.049 [2024-12-05 13:58:37.297752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.049 [2024-12-05 13:58:37.297766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.049 [2024-12-05 13:58:37.310705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.049 [2024-12-05 13:58:37.310720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.049 [2024-12-05 13:58:37.324216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.049 [2024-12-05 13:58:37.324232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.049 [2024-12-05 13:58:37.337285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.049 [2024-12-05 13:58:37.337300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.310 [2024-12-05 13:58:37.350365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.310 [2024-12-05 13:58:37.350380] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.310 [2024-12-05 13:58:37.363012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.310 [2024-12-05 13:58:37.363027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.310 [2024-12-05 13:58:37.375551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.310 [2024-12-05 13:58:37.375566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.310 [2024-12-05 13:58:37.388013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.310 [2024-12-05 13:58:37.388028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.310 [2024-12-05 13:58:37.401534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.310 [2024-12-05 13:58:37.401550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.310 [2024-12-05 13:58:37.413877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.310 [2024-12-05 13:58:37.413892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.310 [2024-12-05 13:58:37.426826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.310 [2024-12-05 13:58:37.426840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.310 [2024-12-05 13:58:37.438907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.310 [2024-12-05 13:58:37.438921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.310 [2024-12-05 13:58:37.452420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.310 [2024-12-05 13:58:37.452435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.310 [2024-12-05 13:58:37.465638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.310 [2024-12-05 13:58:37.465653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.310 [2024-12-05 13:58:37.478313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.310 [2024-12-05 13:58:37.478328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.310 [2024-12-05 13:58:37.491347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.310 [2024-12-05 13:58:37.491362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.310 [2024-12-05 13:58:37.504947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.310 [2024-12-05 13:58:37.504962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.310 [2024-12-05 13:58:37.518652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.310 [2024-12-05 13:58:37.518666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.310 [2024-12-05 13:58:37.531662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.310 [2024-12-05 13:58:37.531677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.310 [2024-12-05 13:58:37.544540] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.310 [2024-12-05 13:58:37.544555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.310 [2024-12-05 13:58:37.557884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.310 [2024-12-05 13:58:37.557899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.310 [2024-12-05 13:58:37.571102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.310 [2024-12-05 13:58:37.571117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.310 [2024-12-05 13:58:37.584076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.310 [2024-12-05 13:58:37.584090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.310 [2024-12-05 13:58:37.597448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.310 [2024-12-05 13:58:37.597466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.571 [2024-12-05 13:58:37.610326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.571 [2024-12-05 13:58:37.610340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.571 [2024-12-05 13:58:37.623004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.571 [2024-12-05 13:58:37.623019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.571 [2024-12-05 13:58:37.636530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.571 [2024-12-05 13:58:37.636545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.571 [2024-12-05 13:58:37.649187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.571 [2024-12-05 13:58:37.649201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.571 [2024-12-05 13:58:37.662451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.571 [2024-12-05 13:58:37.662470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.571 [2024-12-05 13:58:37.675873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.571 [2024-12-05 13:58:37.675887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.571 [2024-12-05 13:58:37.689262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.571 [2024-12-05 13:58:37.689277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.571 [2024-12-05 13:58:37.702670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.571 [2024-12-05 13:58:37.702684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.571 [2024-12-05 13:58:37.716014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.571 [2024-12-05 13:58:37.716029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.571 [2024-12-05 13:58:37.729139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.571 [2024-12-05 13:58:37.729154] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.571 [2024-12-05 13:58:37.742596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.571 [2024-12-05 13:58:37.742611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.571 [2024-12-05 13:58:37.756078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.571 [2024-12-05 13:58:37.756093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.571 [2024-12-05 13:58:37.768819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.571 [2024-12-05 13:58:37.768834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.571 [2024-12-05 13:58:37.781555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.571 [2024-12-05 13:58:37.781570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.571 [2024-12-05 13:58:37.794097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.571 [2024-12-05 13:58:37.794116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.571 [2024-12-05 13:58:37.806414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.571 [2024-12-05 13:58:37.806430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.571 [2024-12-05 13:58:37.818909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.571 [2024-12-05 13:58:37.818924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.571 [2024-12-05 13:58:37.832143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.571 [2024-12-05 13:58:37.832157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.571 [2024-12-05 13:58:37.844818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.571 [2024-12-05 13:58:37.844833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.571 [2024-12-05 13:58:37.858022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.571 [2024-12-05 13:58:37.858037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.831 [2024-12-05 13:58:37.871409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.831 [2024-12-05 13:58:37.871424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.831 [2024-12-05 13:58:37.885121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.831 [2024-12-05 13:58:37.885135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.831 [2024-12-05 13:58:37.898653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.832 [2024-12-05 13:58:37.898667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.832 [2024-12-05 13:58:37.911414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.832 [2024-12-05 13:58:37.911428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.832 [2024-12-05 13:58:37.924961] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.832 [2024-12-05 13:58:37.924975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.832 [2024-12-05 13:58:37.937955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.832 [2024-12-05 13:58:37.937970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.832 [2024-12-05 13:58:37.950678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.832 [2024-12-05 13:58:37.950692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.832 [2024-12-05 13:58:37.963857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.832 [2024-12-05 13:58:37.963872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.832 [2024-12-05 13:58:37.976649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.832 [2024-12-05 13:58:37.976664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.832 [2024-12-05 13:58:37.989979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.832 [2024-12-05 13:58:37.989993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.832 [2024-12-05 13:58:38.003436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.832 [2024-12-05 13:58:38.003451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.832 [2024-12-05 13:58:38.016783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.832 [2024-12-05 13:58:38.016798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.832 [2024-12-05 13:58:38.029706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.832 [2024-12-05 13:58:38.029721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.832 [2024-12-05 13:58:38.043195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.832 [2024-12-05 13:58:38.043214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.832 [2024-12-05 13:58:38.056580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.832 [2024-12-05 13:58:38.056595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.832 [2024-12-05 13:58:38.070033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.832 [2024-12-05 13:58:38.070047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.832 [2024-12-05 13:58:38.083402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.832 [2024-12-05 13:58:38.083417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.832 [2024-12-05 13:58:38.096780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.832 [2024-12-05 13:58:38.096795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.832 [2024-12-05 13:58:38.110091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.832 [2024-12-05 13:58:38.110105] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.832 [2024-12-05 13:58:38.122531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.832 [2024-12-05 13:58:38.122546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.093 19373.00 IOPS, 151.35 MiB/s [2024-12-05T12:58:38.393Z] [2024-12-05 13:58:38.135562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.093 [2024-12-05 13:58:38.135577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.093 [2024-12-05 13:58:38.148955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.093 [2024-12-05 13:58:38.148969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.093 [2024-12-05 13:58:38.162514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.093 [2024-12-05 13:58:38.162528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.093 [2024-12-05 13:58:38.175687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.093 [2024-12-05 13:58:38.175702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.093 [2024-12-05 13:58:38.188587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.093 [2024-12-05 13:58:38.188601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.093 [2024-12-05 13:58:38.201689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.093 [2024-12-05 13:58:38.201703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.093 [2024-12-05 13:58:38.213909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.093 [2024-12-05 13:58:38.213923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.093 [2024-12-05 13:58:38.227431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.093 [2024-12-05 13:58:38.227445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.093 [2024-12-05 13:58:38.240062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.093 [2024-12-05 13:58:38.240076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.093 [2024-12-05 13:58:38.253327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.093 [2024-12-05 13:58:38.253341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.093 [2024-12-05 13:58:38.267142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.093 [2024-12-05 13:58:38.267157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.093 [2024-12-05 13:58:38.279857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.093 [2024-12-05 13:58:38.279872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.093 [2024-12-05 13:58:38.292945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.093 [2024-12-05 13:58:38.292968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.093 [2024-12-05 
13:58:38.306144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.093 [2024-12-05 13:58:38.306158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.093 [2024-12-05 13:58:38.318882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.093 [2024-12-05 13:58:38.318897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.093 [2024-12-05 13:58:38.332062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.093 [2024-12-05 13:58:38.332077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.093 [2024-12-05 13:58:38.345212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.093 [2024-12-05 13:58:38.345226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.093 [2024-12-05 13:58:38.358311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.093 [2024-12-05 13:58:38.358326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.093 [2024-12-05 13:58:38.371055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.093 [2024-12-05 13:58:38.371070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.093 [2024-12-05 13:58:38.384250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.093 [2024-12-05 13:58:38.384265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.354 [2024-12-05 13:58:38.396879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.354 [2024-12-05 13:58:38.396894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.354 [2024-12-05 13:58:38.409933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.354 [2024-12-05 13:58:38.409949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.354 [2024-12-05 13:58:38.423213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.354 [2024-12-05 13:58:38.423227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.354 [2024-12-05 13:58:38.436163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.354 [2024-12-05 13:58:38.436178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.354 [2024-12-05 13:58:38.449695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.354 [2024-12-05 13:58:38.449709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.354 [2024-12-05 13:58:38.463165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.354 [2024-12-05 13:58:38.463181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.354 [2024-12-05 13:58:38.476741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.354 [2024-12-05 13:58:38.476756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.354 [2024-12-05 13:58:38.489787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.354 [2024-12-05 13:58:38.489802] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.354 [2024-12-05 13:58:38.502409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.354 [2024-12-05 13:58:38.502424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.354 [2024-12-05 13:58:38.515743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.355 [2024-12-05 13:58:38.515759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.355 [2024-12-05 13:58:38.529417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.355 [2024-12-05 13:58:38.529432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.355 [2024-12-05 13:58:38.542533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.355 [2024-12-05 13:58:38.542549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.355 [2024-12-05 13:58:38.555636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.355 [2024-12-05 13:58:38.555650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.355 [2024-12-05 13:58:38.569144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.355 [2024-12-05 13:58:38.569158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.355 [2024-12-05 13:58:38.582213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.355 [2024-12-05 13:58:38.582227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.355 [2024-12-05 13:58:38.595809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.355 [2024-12-05 13:58:38.595824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.355 [2024-12-05 13:58:38.608709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.355 [2024-12-05 13:58:38.608724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.355 [2024-12-05 13:58:38.621885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.355 [2024-12-05 13:58:38.621900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.355 [2024-12-05 13:58:38.635366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.355 [2024-12-05 13:58:38.635381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.355 [2024-12-05 13:58:38.648088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.355 [2024-12-05 13:58:38.648102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.615 [2024-12-05 13:58:38.660639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.615 [2024-12-05 13:58:38.660654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.615 [2024-12-05 13:58:38.673852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.615 [2024-12-05 13:58:38.673866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.615 [2024-12-05 13:58:38.686635] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.615 [2024-12-05 13:58:38.686649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.615 [2024-12-05 13:58:38.699949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.615 [2024-12-05 13:58:38.699965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.615 [2024-12-05 13:58:38.712438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.615 [2024-12-05 13:58:38.712458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.615 [2024-12-05 13:58:38.725173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.615 [2024-12-05 13:58:38.725188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.615 [2024-12-05 13:58:38.738672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.615 [2024-12-05 13:58:38.738687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.615 [2024-12-05 13:58:38.752161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.615 [2024-12-05 13:58:38.752176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.615 [2024-12-05 13:58:38.765792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.615 [2024-12-05 13:58:38.765807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.616 [2024-12-05 13:58:38.779262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.616 [2024-12-05 13:58:38.779277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.616 [2024-12-05 13:58:38.792334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.616 [2024-12-05 13:58:38.792349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.616 [2024-12-05 13:58:38.805276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.616 [2024-12-05 13:58:38.805291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.616 [2024-12-05 13:58:38.818291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.616 [2024-12-05 13:58:38.818305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.616 [2024-12-05 13:58:38.831681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.616 [2024-12-05 13:58:38.831696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.616 [2024-12-05 13:58:38.844989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.616 [2024-12-05 13:58:38.845004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.616 [2024-12-05 13:58:38.858056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.616 [2024-12-05 13:58:38.858071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.616 [2024-12-05 13:58:38.871717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.616 [2024-12-05 13:58:38.871732] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.616 [2024-12-05 13:58:38.884876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.616 [2024-12-05 13:58:38.884891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.616 [2024-12-05 13:58:38.897467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.616 [2024-12-05 13:58:38.897482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.616 [2024-12-05 13:58:38.910055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.616 [2024-12-05 13:58:38.910070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.925 [2024-12-05 13:58:38.923690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.925 [2024-12-05 13:58:38.923706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.925 [2024-12-05 13:58:38.936760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.925 [2024-12-05 13:58:38.936775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.925 [2024-12-05 13:58:38.950446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.925 [2024-12-05 13:58:38.950466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.925 [2024-12-05 13:58:38.963377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.925 [2024-12-05 13:58:38.963392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.925 [2024-12-05 13:58:38.976235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.925 [2024-12-05 13:58:38.976250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.925 [2024-12-05 13:58:38.989585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.925 [2024-12-05 13:58:38.989601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.925 [2024-12-05 13:58:39.002605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.925 [2024-12-05 13:58:39.002620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.925 [2024-12-05 13:58:39.015970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.925 [2024-12-05 13:58:39.015985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.925 [2024-12-05 13:58:39.029156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.925 [2024-12-05 13:58:39.029171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.925 [2024-12-05 13:58:39.042431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.925 [2024-12-05 13:58:39.042446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.925 [2024-12-05 13:58:39.055302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.925 [2024-12-05 13:58:39.055316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.925 [2024-12-05 13:58:39.068496] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:32.925 [2024-12-05 13:58:39.068511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:32.925 [2024-12-05 13:58:39.081738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:32.925 [2024-12-05 13:58:39.081752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:32.925 [2024-12-05 13:58:39.094802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:32.925 [2024-12-05 13:58:39.094817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:32.925 [2024-12-05 13:58:39.107613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:32.925 [2024-12-05 13:58:39.107628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:32.925 [2024-12-05 13:58:39.120790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:32.925 [2024-12-05 13:58:39.120805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:32.925 19391.20 IOPS, 151.49 MiB/s [2024-12-05T12:58:39.225Z] [2024-12-05 13:58:39.133119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:32.925 [2024-12-05 13:58:39.133134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:32.925
00:08:32.925 Latency(us)
00:08:32.925 [2024-12-05T12:58:39.225Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:08:32.925 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:32.925 Nvme1n1                     :       5.01   19392.50     151.50       0.00     0.00    6594.27    2594.13   17803.95
00:08:32.925 [2024-12-05T12:58:39.225Z] ===================================================================================================================
00:08:32.925 [2024-12-05T12:58:39.225Z] Total                       :            19392.50     151.50       0.00     0.00    6594.27    2594.13   17803.95
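A quick arithmetic check of the Latency(us) summary above, with all numbers copied from the table (the 8192-byte I/O size comes from the Job line): 19392.50 IOPS at 8 KiB per I/O should reproduce the reported 151.50 MiB/s.

```bash
# Throughput implied by the summary table: IOPS x IO size, converted to MiB/s.
awk 'BEGIN { printf "%.2f MiB/s\n", 19392.50 * 8192 / (1024 * 1024) }'   # -> 151.50 MiB/s
```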
00:08:32.925 [2024-12-05 13:58:39.142653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:32.925 [2024-12-05 13:58:39.142665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:32.925 [2024-12-05 13:58:39.154689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:32.925 [2024-12-05 13:58:39.154703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:32.925 [2024-12-05 13:58:39.166716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:32.925 [2024-12-05 13:58:39.166727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:32.925 [2024-12-05 13:58:39.178747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:32.925 [2024-12-05 13:58:39.178759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:32.925 [2024-12-05 13:58:39.190774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:32.925 [2024-12-05 13:58:39.190784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:32.925 [2024-12-05 13:58:39.202803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:32.925 [2024-12-05 13:58:39.202812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:32.925 [2024-12-05 13:58:39.214833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:32.925 [2024-12-05 13:58:39.214842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:33.185 [2024-12-05 13:58:39.226866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:33.185 [2024-12-05 13:58:39.226880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:33.185 [2024-12-05 13:58:39.238898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:33.185 [2024-12-05 13:58:39.238906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:33.185 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2572293) - No such process
00:08:33.185 13:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2572293
00:08:33.185 13:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:33.185 13:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.185 13:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:33.185 13:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.185 13:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:33.185 13:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.185 13:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:33.185 delay0
00:08:33.185 13:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.185 13:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:08:33.185 13:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.185 13:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:33.185 13:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.185 13:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:08:33.185 [2024-12-05 13:58:39.412587] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:08:41.321 Initializing NVMe Controllers
00:08:41.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:41.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:41.322 Initialization complete. Launching workers.
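The traced steps above (zcopy.sh lines 52 through 56) free NSID 1, recreate it on top of a delay bdev so that queued I/O lingers long enough to be aborted, and then run the abort example against the target; the per-namespace abort statistics follow below. A minimal sketch of replaying the same sequence by hand with SPDK's rpc.py, assuming a target from this job is still running; paths, NQN, and arguments are copied from the trace:

```bash
#!/usr/bin/env bash
# Sketch only: replays zcopy.sh@52-56 against an already-running SPDK nvmf
# target from this job (paths/NQN/arguments copied from the trace above).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NQN=nqn.2016-06.io.spdk:cnode1

"$SPDK/scripts/rpc.py" nvmf_subsystem_remove_ns "$NQN" 1          # free NSID 1
"$SPDK/scripts/rpc.py" bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000                   # ~1 s artificial latency (us)
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns "$NQN" delay0 -n 1   # re-add NSID 1 backed by delay0

# Queue I/O against the slow namespace and abort it mid-flight.
"$SPDK/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
```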
00:08:41.322 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 237, failed: 33587
00:08:41.322 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 33706, failed to submit 118
00:08:41.322 success 33608, unsuccessful 98, failed 0
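The abort counters above are internally consistent: every queued I/O got exactly one abort attempt, and the completed aborts split cleanly into successful and unsuccessful ones. A check with the values copied from this run:

```bash
# Cross-checking the counters printed above (all values from this run):
echo $((33608 + 98))      # success + unsuccessful       = 33706 aborts completed, matching "submitted"
echo $((33706 + 118))     # submitted + failed-to-submit = 33824 abort attempts
echo $((237 + 33587))     # I/O completed + I/O failed   = 33824, i.e. one abort per I/O
```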
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:41.322 rmmod nvme_tcp
00:08:41.322 rmmod nvme_fabrics
00:08:41.322 rmmod nvme_keyring
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2570065 ']'
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2570065
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2570065 ']'
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2570065
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2570065
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2570065'
00:08:41.322 killing process with pid 2570065
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2570065
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2570065
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:41.322 13:58:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:42.707 13:58:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:42.707
00:08:42.707 real 0m34.229s
00:08:42.707 user 0m45.241s
00:08:42.707 sys 0m11.610s
00:08:42.707 13:58:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:42.707 13:58:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:42.707 ************************************
00:08:42.707 END TEST nvmf_zcopy
00:08:42.707 ************************************
00:08:42.707 13:58:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:08:42.707 13:58:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:42.707 13:58:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:42.707 13:58:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:42.707 ************************************
00:08:42.707 START TEST nvmf_nmic
00:08:42.707 ************************************
00:08:42.707 13:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:08:42.707 * Looking for test storage...
00:08:42.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.707 13:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:42.707 13:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:08:42.707 13:58:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:42.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.971 --rc genhtml_branch_coverage=1 00:08:42.971 --rc genhtml_function_coverage=1 00:08:42.971 --rc genhtml_legend=1 00:08:42.971 --rc geninfo_all_blocks=1 00:08:42.971 --rc geninfo_unexecuted_blocks=1 00:08:42.971 00:08:42.971 ' 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:42.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.971 --rc genhtml_branch_coverage=1 00:08:42.971 --rc genhtml_function_coverage=1 00:08:42.971 --rc genhtml_legend=1 00:08:42.971 --rc geninfo_all_blocks=1 00:08:42.971 --rc geninfo_unexecuted_blocks=1 00:08:42.971 00:08:42.971 ' 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:42.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.971 --rc genhtml_branch_coverage=1 00:08:42.971 --rc genhtml_function_coverage=1 00:08:42.971 --rc genhtml_legend=1 00:08:42.971 --rc geninfo_all_blocks=1 00:08:42.971 --rc geninfo_unexecuted_blocks=1 00:08:42.971 00:08:42.971 ' 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:42.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.971 --rc genhtml_branch_coverage=1 00:08:42.971 --rc genhtml_function_coverage=1 00:08:42.971 --rc genhtml_legend=1 00:08:42.971 --rc geninfo_all_blocks=1 00:08:42.971 --rc geninfo_unexecuted_blocks=1 00:08:42.971 00:08:42.971 ' 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
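The lcov version probe traced above (scripts/common.sh) splits both version strings on '.', '-' or ':' and compares the fields numerically, left to right. A minimal sketch of that comparison, assuming purely numeric fields (the real helper additionally normalizes each field through its decimal check):

    lt() {   # lt 1.15 2  -> exit 0 (true) when $1 sorts before $2
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not less-than
    }
    # as in the trace: lcov 1.15 < 2, so branch/function coverage flags are enabled
    lt "$(lcov --version | awk '{print $NF}')" 2 &&
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'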
00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.971 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:42.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:42.972 
13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:42.972 13:58:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.145 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:51.146 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:51.146 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:51.146 13:58:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:51.146 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:51.146 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:51.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:08:51.146 00:08:51.146 --- 10.0.0.2 ping statistics --- 00:08:51.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.146 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:51.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:08:51.146 00:08:51.146 --- 10.0.0.1 ping statistics --- 00:08:51.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.146 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2579128 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2579128 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2579128 ']' 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.146 13:58:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:51.146 [2024-12-05 13:58:56.603675] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:08:51.147 [2024-12-05 13:58:56.603738] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.147 [2024-12-05 13:58:56.701716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.147 [2024-12-05 13:58:56.756869] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.147 [2024-12-05 13:58:56.756923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.147 [2024-12-05 13:58:56.756933] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.147 [2024-12-05 13:58:56.756940] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.147 [2024-12-05 13:58:56.756946] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.147 [2024-12-05 13:58:56.759038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.147 [2024-12-05 13:58:56.759201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.147 [2024-12-05 13:58:56.759362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.147 [2024-12-05 13:58:56.759363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.147 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.147 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:51.147 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:51.147 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:51.147 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:51.482 [2024-12-05 13:58:57.476520] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:51.482 Malloc0 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:51.482 [2024-12-05 13:58:57.552195] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:51.482 test case1: single bdev can't be used in multiple subsystems 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.482 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:51.482 [2024-12-05 13:58:57.588083] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:51.482 [2024-12-05 13:58:57.588110] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:51.482 [2024-12-05 13:58:57.588119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.482 request: 00:08:51.482 { 00:08:51.482 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:51.482 "namespace": { 00:08:51.482 "bdev_name": "Malloc0", 00:08:51.482 "no_auto_visible": false, 
00:08:51.482 "hide_metadata": false 00:08:51.482 }, 00:08:51.482 "method": "nvmf_subsystem_add_ns", 00:08:51.482 "req_id": 1 00:08:51.482 } 00:08:51.482 Got JSON-RPC error response 00:08:51.483 response: 00:08:51.483 { 00:08:51.483 "code": -32602, 00:08:51.483 "message": "Invalid parameters" 00:08:51.483 } 00:08:51.483 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:51.483 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:51.483 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:51.483 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:51.483 Adding namespace failed - expected result. 00:08:51.483 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:51.483 test case2: host connect to nvmf target in multiple paths 00:08:51.483 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:51.483 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.483 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:51.483 [2024-12-05 13:58:57.600293] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:51.483 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.483 13:58:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:52.868 13:58:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:54.777 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:54.777 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:54.777 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:54.777 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:54.777 13:59:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:08:56.706 13:59:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:56.706 13:59:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:56.706 13:59:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:56.706 13:59:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:56.706 13:59:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:56.706 13:59:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:56.706 13:59:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:56.706 [global] 00:08:56.706 thread=1 00:08:56.706 invalidate=1 00:08:56.706 rw=write 00:08:56.706 time_based=1 00:08:56.706 runtime=1 00:08:56.706 ioengine=libaio 00:08:56.706 direct=1 00:08:56.706 bs=4096 00:08:56.706 iodepth=1 00:08:56.706 norandommap=0 00:08:56.706 numjobs=1 00:08:56.706 00:08:56.706 verify_dump=1 00:08:56.706 verify_backlog=512 00:08:56.706 verify_state_save=0 00:08:56.706 do_verify=1 00:08:56.706 verify=crc32c-intel 00:08:56.706 [job0] 00:08:56.706 filename=/dev/nvme0n1 00:08:56.706 Could not set queue depth (nvme0n1) 00:08:56.967 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:56.967 fio-3.35 00:08:56.967 Starting 1 thread 00:08:58.350 00:08:58.350 job0: (groupid=0, jobs=1): err= 0: pid=2580563: Thu Dec 5 13:59:04 2024 00:08:58.350 read: IOPS=924, BW=3696KiB/s (3785kB/s)(3700KiB/1001msec) 00:08:58.350 slat (nsec): min=6816, max=60081, avg=24403.56, stdev=7335.97 00:08:58.350 clat (usec): min=162, max=1118, avg=709.69, stdev=213.23 00:08:58.350 lat (usec): min=170, max=1145, avg=734.09, stdev=216.06 00:08:58.350 clat percentiles (usec): 00:08:58.350 | 1.00th=[ 289], 5.00th=[ 408], 10.00th=[ 469], 20.00th=[ 537], 00:08:58.350 | 30.00th=[ 594], 40.00th=[ 619], 50.00th=[ 635], 60.00th=[ 660], 00:08:58.350 | 70.00th=[ 922], 80.00th=[ 971], 90.00th=[ 1012], 95.00th=[ 1029], 00:08:58.350 | 99.00th=[ 1074], 99.50th=[ 1090], 99.90th=[ 1123], 99.95th=[ 1123], 00:08:58.350 | 99.99th=[ 1123] 00:08:58.350 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:08:58.350 slat (usec): min=9, max=28783, avg=51.38, stdev=898.83 00:08:58.350 clat (usec): min=88, max=481, avg=249.79, stdev=80.15 00:08:58.350 lat (usec): min=101, max=28966, avg=301.18, stdev=900.73 00:08:58.350 clat percentiles (usec): 00:08:58.350 | 1.00th=[ 114], 5.00th=[ 119], 10.00th=[ 122], 20.00th=[ 141], 00:08:58.350 | 30.00th=[ 215], 40.00th=[ 241], 50.00th=[ 269], 60.00th=[ 289], 00:08:58.350 | 70.00th=[ 293], 80.00th=[ 310], 90.00th=[ 343], 95.00th=[ 363], 00:08:58.350 | 99.00th=[ 412], 99.50th=[ 424], 99.90th=[ 461], 99.95th=[ 482], 00:08:58.350 | 99.99th=[ 482] 00:08:58.350 bw ( KiB/s): min= 3928, max= 4264, per=100.00%, avg=4096.00, stdev=237.59, samples=2 00:08:58.350 iops : min= 982, max= 1066, avg=1024.00, stdev=59.40, samples=2 00:08:58.350 lat (usec) : 100=0.15%, 250=22.06%, 500=36.74%, 750=24.42%, 1000=10.36% 00:08:58.350 lat (msec) : 2=6.26% 00:08:58.350 cpu : usr=3.00%, sys=4.90%, ctx=1952, majf=0, minf=1 00:08:58.350 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:58.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.350 issued rwts: total=925,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.350 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:58.350 00:08:58.350 Run status group 0 (all jobs): 00:08:58.350 READ: bw=3696KiB/s (3785kB/s), 3696KiB/s-3696KiB/s (3785kB/s-3785kB/s), io=3700KiB (3789kB), run=1001-1001msec 00:08:58.350 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:08:58.350 00:08:58.350 Disk stats (read/write): 00:08:58.350 nvme0n1: ios=849/1024, merge=0/0, ticks=1529/254, in_queue=1783, util=98.90% 
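The fio-wrapper flags map directly onto the [job0] file printed before the run (-i 4096 -> bs=4096, -d 1 -> iodepth=1, -t write -> rw=write, -r 1 -> runtime=1, -v -> crc32c verification). An equivalent standalone invocation would look roughly like this — a sketch for reference, since the wrapper actually generates the job file rather than passing flags:

    fio --name=job0 --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --thread=1 --invalidate=1 \
        --bs=4096 --iodepth=1 --rw=write --numjobs=1 \
        --time_based=1 --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512

The run above finished with err=0 at util=98.90%, i.e. the one-second write-plus-verify pass over the connected namespace completed cleanly.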
00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:58.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:58.350 rmmod nvme_tcp 00:08:58.350 rmmod nvme_fabrics 00:08:58.350 rmmod nvme_keyring 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2579128 ']' 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2579128 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2579128 ']' 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2579128 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2579128 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2579128' 00:08:58.350 killing process with pid 2579128 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2579128 
00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2579128 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:58.350 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:58.613 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.613 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.613 13:59:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.525 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:00.525 00:09:00.525 real 0m17.852s 00:09:00.525 user 0m45.674s 00:09:00.525 sys 0m6.437s 00:09:00.525 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.525 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:00.525 ************************************ 00:09:00.525 END TEST nvmf_nmic 00:09:00.525 ************************************ 00:09:00.525 13:59:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:00.525 13:59:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:00.525 13:59:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.525 13:59:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:00.525 ************************************ 00:09:00.525 START TEST nvmf_fio_target 00:09:00.525 ************************************ 00:09:00.525 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:00.787 * Looking for test storage... 
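The nvmf_fio_target test starting here repeats the environment and target bring-up that the nmic trace above walked through. Condensed from those xtrace lines (rpc.py stands for scripts/rpc.py; addresses, interface names, and NQNs are the ones this run used):

    # network side: isolate one E810 port in a namespace to act as the target
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # target side: nvmf_tgt runs inside the namespace, then is configured over RPC
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420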
00:09:00.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:00.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.787 --rc genhtml_branch_coverage=1 00:09:00.787 --rc genhtml_function_coverage=1 00:09:00.787 --rc genhtml_legend=1 00:09:00.787 --rc geninfo_all_blocks=1 00:09:00.787 --rc geninfo_unexecuted_blocks=1 00:09:00.787 00:09:00.787 ' 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:00.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.787 --rc genhtml_branch_coverage=1 00:09:00.787 --rc genhtml_function_coverage=1 00:09:00.787 --rc genhtml_legend=1 00:09:00.787 --rc geninfo_all_blocks=1 00:09:00.787 --rc geninfo_unexecuted_blocks=1 00:09:00.787 00:09:00.787 ' 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:00.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.787 --rc genhtml_branch_coverage=1 00:09:00.787 --rc genhtml_function_coverage=1 00:09:00.787 --rc genhtml_legend=1 00:09:00.787 --rc geninfo_all_blocks=1 00:09:00.787 --rc geninfo_unexecuted_blocks=1 00:09:00.787 00:09:00.787 ' 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:00.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.787 --rc genhtml_branch_coverage=1 00:09:00.787 --rc genhtml_function_coverage=1 00:09:00.787 --rc genhtml_legend=1 00:09:00.787 --rc geninfo_all_blocks=1 00:09:00.787 --rc geninfo_unexecuted_blocks=1 00:09:00.787 00:09:00.787 ' 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.787 13:59:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.787 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:00.787 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:00.787 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.787 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.787 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.787 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.787 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:00.787 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:00.787 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.787 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.787 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.787 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.787 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.787 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.787 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:00.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:00.788 13:59:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:00.788 13:59:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:08.934 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.934 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:08.934 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:08.934 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.935 13:59:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:08.935 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.935 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:08.936 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.936 13:59:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:08.936 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:08.936 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:08.936 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:08.937 13:59:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.937 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.937 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.937 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:08.937 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.937 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.937 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:08.937 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:08.937 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.937 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.937 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:08.937 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:08.937 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.937 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.937 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.937 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.937 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:08.937 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.937 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.937 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.937 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:08.937 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:08.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:09:08.937 00:09:08.937 --- 10.0.0.2 ping statistics --- 00:09:08.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.937 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:09:08.937 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:09:08.937 00:09:08.937 --- 10.0.0.1 ping statistics --- 00:09:08.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.938 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:09:08.938 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.938 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:08.938 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:08.938 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.938 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:08.938 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:08.938 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.938 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:08.938 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:08.938 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:08.938 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:08.938 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:08.938 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:08.938 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2585025 00:09:08.938 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2585025 00:09:08.938 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:08.938 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2585025 ']' 00:09:08.938 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.938 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.938 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.939 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.939 13:59:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:08.939 [2024-12-05 13:59:14.522196] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:09:08.939 [2024-12-05 13:59:14.522282] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.939 [2024-12-05 13:59:14.622928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:08.939 [2024-12-05 13:59:14.675593] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.939 [2024-12-05 13:59:14.675645] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:08.939 [2024-12-05 13:59:14.675653] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.939 [2024-12-05 13:59:14.675661] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.939 [2024-12-05 13:59:14.675667] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.939 [2024-12-05 13:59:14.677742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.939 [2024-12-05 13:59:14.677903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.939 [2024-12-05 13:59:14.678064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.939 [2024-12-05 13:59:14.678065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.211 13:59:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:09.212 13:59:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:09.212 13:59:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:09.212 13:59:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:09.212 13:59:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:09.212 13:59:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.212 13:59:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:09.471 [2024-12-05 13:59:15.527560] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.471 13:59:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:09.731 13:59:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:09.731 13:59:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:09.731 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:09.731 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:09.992 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:09.992 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:10.253 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:10.253 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:10.514 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:10.775 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:10.775 13:59:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:11.035 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:11.035 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:11.036 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:11.036 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:11.295 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:11.555 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:11.556 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:11.816 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:11.816 13:59:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:11.816 13:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:12.076 [2024-12-05 13:59:18.208243] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.076 13:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:12.373 13:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:12.373 13:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.286 13:59:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:14.286 13:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:14.286 13:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:14.286 13:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:14.286 13:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:14.286 13:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:16.219 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:16.219 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:16.219 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:16.219 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:16.219 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:16.219 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:16.219 13:59:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:16.219 [global] 00:09:16.219 thread=1 00:09:16.219 invalidate=1 00:09:16.219 rw=write 00:09:16.219 time_based=1 00:09:16.219 runtime=1 00:09:16.219 ioengine=libaio 00:09:16.219 direct=1 00:09:16.219 bs=4096 00:09:16.219 iodepth=1 00:09:16.219 norandommap=0 00:09:16.219 numjobs=1 00:09:16.219 00:09:16.219 verify_dump=1 00:09:16.219 verify_backlog=512 00:09:16.219 verify_state_save=0 00:09:16.219 do_verify=1 00:09:16.219 verify=crc32c-intel 00:09:16.219 [job0] 00:09:16.219 filename=/dev/nvme0n1 00:09:16.219 [job1] 00:09:16.219 filename=/dev/nvme0n2 00:09:16.219 [job2] 00:09:16.219 filename=/dev/nvme0n3 00:09:16.219 [job3] 00:09:16.219 filename=/dev/nvme0n4 00:09:16.219 Could not set queue depth (nvme0n1) 00:09:16.219 Could not set queue depth (nvme0n2) 00:09:16.219 Could not set queue depth (nvme0n3) 00:09:16.219 Could not set queue depth (nvme0n4) 00:09:16.479 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:16.479 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:16.479 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:16.479 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:16.479 fio-3.35 00:09:16.479 Starting 4 threads 00:09:17.885 00:09:17.885 job0: (groupid=0, jobs=1): err= 0: pid=2586946: Thu Dec 5 13:59:23 2024 00:09:17.885 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:17.885 slat (nsec): min=7992, max=44152, avg=26467.61, stdev=2346.01 00:09:17.885 clat (usec): min=620, max=1234, avg=956.69, stdev=117.78 00:09:17.885 lat (usec): min=647, max=1261, avg=983.16, stdev=117.71 00:09:17.885 clat percentiles (usec): 00:09:17.885 | 1.00th=[ 685], 5.00th=[ 750], 10.00th=[ 775], 20.00th=[ 857], 
00:09:17.885 | 30.00th=[ 906], 40.00th=[ 947], 50.00th=[ 979], 60.00th=[ 1012], 00:09:17.885 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 1123], 00:09:17.885 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 1237], 99.95th=[ 1237], 00:09:17.885 | 99.99th=[ 1237] 00:09:17.885 write: IOPS=728, BW=2913KiB/s (2983kB/s)(2916KiB/1001msec); 0 zone resets 00:09:17.885 slat (nsec): min=9999, max=55740, avg=30813.76, stdev=9766.81 00:09:17.885 clat (usec): min=142, max=1003, avg=637.06, stdev=151.46 00:09:17.885 lat (usec): min=177, max=1055, avg=667.87, stdev=153.11 00:09:17.885 clat percentiles (usec): 00:09:17.885 | 1.00th=[ 277], 5.00th=[ 396], 10.00th=[ 429], 20.00th=[ 510], 00:09:17.885 | 30.00th=[ 553], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 685], 00:09:17.885 | 70.00th=[ 725], 80.00th=[ 783], 90.00th=[ 840], 95.00th=[ 865], 00:09:17.885 | 99.00th=[ 947], 99.50th=[ 963], 99.90th=[ 1004], 99.95th=[ 1004], 00:09:17.885 | 99.99th=[ 1004] 00:09:17.885 bw ( KiB/s): min= 4096, max= 4096, per=37.88%, avg=4096.00, stdev= 0.00, samples=1 00:09:17.885 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:17.885 lat (usec) : 250=0.24%, 500=10.80%, 750=35.29%, 1000=36.42% 00:09:17.885 lat (msec) : 2=17.24% 00:09:17.885 cpu : usr=1.60%, sys=4.00%, ctx=1242, majf=0, minf=1 00:09:17.885 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:17.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.885 issued rwts: total=512,729,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.885 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:17.885 job1: (groupid=0, jobs=1): err= 0: pid=2586947: Thu Dec 5 13:59:23 2024 00:09:17.885 read: IOPS=17, BW=71.1KiB/s (72.9kB/s)(72.0KiB/1012msec) 00:09:17.885 slat (nsec): min=9973, max=27049, avg=25459.89, stdev=3877.14 00:09:17.885 clat (usec): min=868, max=41994, avg=38932.70, stdev=9506.37 00:09:17.885 lat (usec): min=878, max=42021, avg=38958.16, stdev=9510.23 00:09:17.885 clat percentiles (usec): 00:09:17.885 | 1.00th=[ 865], 5.00th=[ 865], 10.00th=[40633], 20.00th=[41157], 00:09:17.885 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:17.885 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:09:17.885 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:17.885 | 99.99th=[42206] 00:09:17.885 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:09:17.885 slat (nsec): min=9374, max=67950, avg=31402.31, stdev=8186.88 00:09:17.885 clat (usec): min=201, max=880, avg=567.16, stdev=116.32 00:09:17.885 lat (usec): min=211, max=914, avg=598.56, stdev=118.85 00:09:17.885 clat percentiles (usec): 00:09:17.885 | 1.00th=[ 281], 5.00th=[ 359], 10.00th=[ 416], 20.00th=[ 469], 00:09:17.885 | 30.00th=[ 515], 40.00th=[ 545], 50.00th=[ 570], 60.00th=[ 594], 00:09:17.885 | 70.00th=[ 627], 80.00th=[ 668], 90.00th=[ 717], 95.00th=[ 750], 00:09:17.885 | 99.00th=[ 824], 99.50th=[ 865], 99.90th=[ 881], 99.95th=[ 881], 00:09:17.885 | 99.99th=[ 881] 00:09:17.885 bw ( KiB/s): min= 4096, max= 4096, per=37.88%, avg=4096.00, stdev= 0.00, samples=1 00:09:17.885 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:17.885 lat (usec) : 250=0.57%, 500=25.09%, 750=66.42%, 1000=4.72% 00:09:17.885 lat (msec) : 50=3.21% 00:09:17.885 cpu : usr=0.69%, sys=2.37%, ctx=530, majf=0, minf=2 00:09:17.885 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:17.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.885 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.885 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:17.885 job2: (groupid=0, jobs=1): err= 0: pid=2586949: Thu Dec 5 13:59:23 2024 00:09:17.885 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:17.885 slat (nsec): min=7812, max=60180, avg=27056.46, stdev=3347.89 00:09:17.885 clat (usec): min=640, max=1238, avg=1003.64, stdev=95.02 00:09:17.885 lat (usec): min=667, max=1265, avg=1030.69, stdev=95.11 00:09:17.885 clat percentiles (usec): 00:09:17.885 | 1.00th=[ 701], 5.00th=[ 832], 10.00th=[ 881], 20.00th=[ 930], 00:09:17.885 | 30.00th=[ 963], 40.00th=[ 996], 50.00th=[ 1012], 60.00th=[ 1037], 00:09:17.885 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1139], 00:09:17.885 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 1237], 99.95th=[ 1237], 00:09:17.885 | 99.99th=[ 1237] 00:09:17.885 write: IOPS=718, BW=2873KiB/s (2942kB/s)(2876KiB/1001msec); 0 zone resets 00:09:17.885 slat (nsec): min=10137, max=65485, avg=32469.87, stdev=8934.60 00:09:17.885 clat (usec): min=237, max=940, avg=610.36, stdev=119.17 00:09:17.885 lat (usec): min=272, max=980, avg=642.83, stdev=121.99 00:09:17.885 clat percentiles (usec): 00:09:17.885 | 1.00th=[ 318], 5.00th=[ 388], 10.00th=[ 453], 20.00th=[ 515], 00:09:17.885 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 644], 00:09:17.885 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 758], 95.00th=[ 799], 00:09:17.885 | 99.00th=[ 898], 99.50th=[ 930], 99.90th=[ 938], 99.95th=[ 938], 00:09:17.885 | 99.99th=[ 938] 00:09:17.885 bw ( KiB/s): min= 4096, max= 4096, per=37.88%, avg=4096.00, stdev= 0.00, samples=1 00:09:17.885 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:17.885 lat (usec) : 250=0.08%, 500=9.99%, 750=42.49%, 1000=23.88% 00:09:17.885 lat (msec) : 2=23.56% 00:09:17.885 cpu : usr=2.10%, sys=3.60%, ctx=1232, majf=0, minf=1 00:09:17.885 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:17.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.885 issued rwts: total=512,719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.885 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:17.885 job3: (groupid=0, jobs=1): err= 0: pid=2586950: Thu Dec 5 13:59:23 2024 00:09:17.885 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:17.885 slat (nsec): min=7496, max=44282, avg=25545.82, stdev=3519.06 00:09:17.885 clat (usec): min=404, max=1300, avg=928.37, stdev=160.22 00:09:17.885 lat (usec): min=430, max=1326, avg=953.92, stdev=160.49 00:09:17.885 clat percentiles (usec): 00:09:17.885 | 1.00th=[ 545], 5.00th=[ 676], 10.00th=[ 734], 20.00th=[ 783], 00:09:17.886 | 30.00th=[ 807], 40.00th=[ 857], 50.00th=[ 947], 60.00th=[ 1012], 00:09:17.886 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1139], 00:09:17.886 | 99.00th=[ 1205], 99.50th=[ 1237], 99.90th=[ 1303], 99.95th=[ 1303], 00:09:17.886 | 99.99th=[ 1303] 00:09:17.886 write: IOPS=775, BW=3101KiB/s (3175kB/s)(3104KiB/1001msec); 0 zone resets 00:09:17.886 slat (nsec): min=9790, max=68579, avg=31894.64, stdev=7591.26 00:09:17.886 clat (usec): min=137, max=990, avg=614.74, 
stdev=148.03 00:09:17.886 lat (usec): min=148, max=1023, avg=646.64, stdev=149.82 00:09:17.886 clat percentiles (usec): 00:09:17.886 | 1.00th=[ 249], 5.00th=[ 355], 10.00th=[ 429], 20.00th=[ 498], 00:09:17.886 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 619], 60.00th=[ 652], 00:09:17.886 | 70.00th=[ 693], 80.00th=[ 742], 90.00th=[ 807], 95.00th=[ 857], 00:09:17.886 | 99.00th=[ 922], 99.50th=[ 947], 99.90th=[ 988], 99.95th=[ 988], 00:09:17.886 | 99.99th=[ 988] 00:09:17.886 bw ( KiB/s): min= 4096, max= 4096, per=37.88%, avg=4096.00, stdev= 0.00, samples=1 00:09:17.886 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:17.886 lat (usec) : 250=0.62%, 500=11.65%, 750=41.46%, 1000=29.81% 00:09:17.886 lat (msec) : 2=16.46% 00:09:17.886 cpu : usr=1.70%, sys=4.10%, ctx=1288, majf=0, minf=1 00:09:17.886 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:17.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.886 issued rwts: total=512,776,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.886 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:17.886 00:09:17.886 Run status group 0 (all jobs): 00:09:17.886 READ: bw=6142KiB/s (6290kB/s), 71.1KiB/s-2046KiB/s (72.9kB/s-2095kB/s), io=6216KiB (6365kB), run=1001-1012msec 00:09:17.886 WRITE: bw=10.6MiB/s (11.1MB/s), 2024KiB/s-3101KiB/s (2072kB/s-3175kB/s), io=10.7MiB (11.2MB), run=1001-1012msec 00:09:17.886 00:09:17.886 Disk stats (read/write): 00:09:17.886 nvme0n1: ios=512/512, merge=0/0, ticks=1459/323, in_queue=1782, util=96.39% 00:09:17.886 nvme0n2: ios=47/512, merge=0/0, ticks=541/204, in_queue=745, util=87.54% 00:09:17.886 nvme0n3: ios=501/512, merge=0/0, ticks=1386/298, in_queue=1684, util=96.61% 00:09:17.886 nvme0n4: ios=512/531, merge=0/0, ticks=468/303, in_queue=771, util=89.50% 00:09:17.886 13:59:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:17.886 [global] 00:09:17.886 thread=1 00:09:17.886 invalidate=1 00:09:17.886 rw=randwrite 00:09:17.886 time_based=1 00:09:17.886 runtime=1 00:09:17.886 ioengine=libaio 00:09:17.886 direct=1 00:09:17.886 bs=4096 00:09:17.886 iodepth=1 00:09:17.886 norandommap=0 00:09:17.886 numjobs=1 00:09:17.886 00:09:17.886 verify_dump=1 00:09:17.886 verify_backlog=512 00:09:17.886 verify_state_save=0 00:09:17.886 do_verify=1 00:09:17.886 verify=crc32c-intel 00:09:17.886 [job0] 00:09:17.886 filename=/dev/nvme0n1 00:09:17.886 [job1] 00:09:17.886 filename=/dev/nvme0n2 00:09:17.886 [job2] 00:09:17.886 filename=/dev/nvme0n3 00:09:17.886 [job3] 00:09:17.886 filename=/dev/nvme0n4 00:09:17.886 Could not set queue depth (nvme0n1) 00:09:17.886 Could not set queue depth (nvme0n2) 00:09:17.886 Could not set queue depth (nvme0n3) 00:09:17.886 Could not set queue depth (nvme0n4) 00:09:18.147 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.147 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.147 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.147 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.147 fio-3.35 00:09:18.147 Starting 4 threads 00:09:19.564 
00:09:19.564 job0: (groupid=0, jobs=1): err= 0: pid=2587466: Thu Dec 5 13:59:25 2024 00:09:19.564 read: IOPS=16, BW=65.6KiB/s (67.2kB/s)(68.0KiB/1036msec) 00:09:19.564 slat (nsec): min=10022, max=25745, avg=24585.18, stdev=3754.90 00:09:19.564 clat (usec): min=41081, max=42075, avg=41853.04, stdev=289.03 00:09:19.564 lat (usec): min=41091, max=42100, avg=41877.63, stdev=291.58 00:09:19.564 clat percentiles (usec): 00:09:19.564 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:09:19.564 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:09:19.564 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:19.564 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:19.564 | 99.99th=[42206] 00:09:19.564 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:09:19.564 slat (nsec): min=9407, max=54207, avg=28532.19, stdev=8578.22 00:09:19.564 clat (usec): min=253, max=939, avg=597.08, stdev=114.77 00:09:19.564 lat (usec): min=264, max=970, avg=625.61, stdev=118.11 00:09:19.564 clat percentiles (usec): 00:09:19.564 | 1.00th=[ 330], 5.00th=[ 396], 10.00th=[ 449], 20.00th=[ 490], 00:09:19.564 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 627], 00:09:19.564 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 734], 95.00th=[ 775], 00:09:19.564 | 99.00th=[ 848], 99.50th=[ 889], 99.90th=[ 938], 99.95th=[ 938], 00:09:19.564 | 99.99th=[ 938] 00:09:19.564 bw ( KiB/s): min= 4096, max= 4096, per=42.76%, avg=4096.00, stdev= 0.00, samples=1 00:09:19.564 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:19.564 lat (usec) : 500=20.98%, 750=67.67%, 1000=8.13% 00:09:19.564 lat (msec) : 50=3.21% 00:09:19.564 cpu : usr=0.77%, sys=1.35%, ctx=529, majf=0, minf=2 00:09:19.564 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:19.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.564 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.564 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:19.564 job1: (groupid=0, jobs=1): err= 0: pid=2587470: Thu Dec 5 13:59:25 2024 00:09:19.564 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:19.564 slat (nsec): min=11397, max=46444, avg=28293.59, stdev=2150.95 00:09:19.564 clat (usec): min=782, max=1183, avg=988.54, stdev=61.40 00:09:19.564 lat (usec): min=811, max=1211, avg=1016.83, stdev=61.08 00:09:19.564 clat percentiles (usec): 00:09:19.564 | 1.00th=[ 816], 5.00th=[ 873], 10.00th=[ 906], 20.00th=[ 947], 00:09:19.564 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1004], 00:09:19.564 | 70.00th=[ 1020], 80.00th=[ 1037], 90.00th=[ 1057], 95.00th=[ 1090], 00:09:19.564 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1188], 99.95th=[ 1188], 00:09:19.564 | 99.99th=[ 1188] 00:09:19.564 write: IOPS=732, BW=2929KiB/s (2999kB/s)(2932KiB/1001msec); 0 zone resets 00:09:19.564 slat (nsec): min=9272, max=69142, avg=31357.27, stdev=10115.56 00:09:19.564 clat (usec): min=250, max=958, avg=606.34, stdev=104.51 00:09:19.564 lat (usec): min=259, max=993, avg=637.70, stdev=109.33 00:09:19.564 clat percentiles (usec): 00:09:19.564 | 1.00th=[ 355], 5.00th=[ 408], 10.00th=[ 461], 20.00th=[ 523], 00:09:19.564 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 644], 00:09:19.564 | 70.00th=[ 676], 80.00th=[ 701], 90.00th=[ 734], 95.00th=[ 750], 00:09:19.564 | 
99.00th=[ 799], 99.50th=[ 832], 99.90th=[ 963], 99.95th=[ 963], 00:09:19.564 | 99.99th=[ 963] 00:09:19.564 bw ( KiB/s): min= 4096, max= 4096, per=42.76%, avg=4096.00, stdev= 0.00, samples=1 00:09:19.564 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:19.564 lat (usec) : 500=9.32%, 750=46.43%, 1000=26.18% 00:09:19.564 lat (msec) : 2=18.07% 00:09:19.564 cpu : usr=3.00%, sys=4.60%, ctx=1246, majf=0, minf=1 00:09:19.564 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:19.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.564 issued rwts: total=512,733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.564 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:19.564 job2: (groupid=0, jobs=1): err= 0: pid=2587478: Thu Dec 5 13:59:25 2024 00:09:19.564 read: IOPS=18, BW=74.1KiB/s (75.9kB/s)(76.0KiB/1026msec) 00:09:19.564 slat (nsec): min=27237, max=27945, avg=27519.11, stdev=190.68 00:09:19.564 clat (usec): min=40844, max=41913, avg=41020.93, stdev=224.96 00:09:19.564 lat (usec): min=40871, max=41940, avg=41048.45, stdev=224.91 00:09:19.564 clat percentiles (usec): 00:09:19.564 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:19.564 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:19.564 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:09:19.564 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:19.564 | 99.99th=[41681] 00:09:19.564 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:09:19.564 slat (nsec): min=9950, max=65024, avg=30671.50, stdev=9296.62 00:09:19.564 clat (usec): min=116, max=771, avg=439.34, stdev=87.82 00:09:19.564 lat (usec): min=127, max=806, avg=470.01, stdev=91.40 00:09:19.564 clat percentiles (usec): 00:09:19.564 | 1.00th=[ 231], 5.00th=[ 293], 10.00th=[ 326], 20.00th=[ 363], 00:09:19.564 | 30.00th=[ 400], 40.00th=[ 429], 50.00th=[ 449], 60.00th=[ 465], 00:09:19.564 | 70.00th=[ 482], 80.00th=[ 506], 90.00th=[ 545], 95.00th=[ 578], 00:09:19.564 | 99.00th=[ 660], 99.50th=[ 693], 99.90th=[ 775], 99.95th=[ 775], 00:09:19.564 | 99.99th=[ 775] 00:09:19.564 bw ( KiB/s): min= 4096, max= 4096, per=42.76%, avg=4096.00, stdev= 0.00, samples=1 00:09:19.564 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:19.564 lat (usec) : 250=1.32%, 500=74.01%, 750=20.90%, 1000=0.19% 00:09:19.564 lat (msec) : 50=3.58% 00:09:19.564 cpu : usr=1.17%, sys=1.17%, ctx=532, majf=0, minf=1 00:09:19.564 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:19.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.564 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.564 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:19.564 job3: (groupid=0, jobs=1): err= 0: pid=2587479: Thu Dec 5 13:59:25 2024 00:09:19.564 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:19.564 slat (nsec): min=27236, max=58512, avg=28201.44, stdev=2730.94 00:09:19.564 clat (usec): min=760, max=1186, avg=995.88, stdev=63.88 00:09:19.564 lat (usec): min=788, max=1214, avg=1024.08, stdev=63.55 00:09:19.564 clat percentiles (usec): 00:09:19.564 | 1.00th=[ 807], 5.00th=[ 873], 10.00th=[ 914], 20.00th=[ 955], 00:09:19.564 | 30.00th=[ 
971], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1012], 00:09:19.564 | 70.00th=[ 1029], 80.00th=[ 1045], 90.00th=[ 1057], 95.00th=[ 1090], 00:09:19.564 | 99.00th=[ 1139], 99.50th=[ 1172], 99.90th=[ 1188], 99.95th=[ 1188], 00:09:19.564 | 99.99th=[ 1188] 00:09:19.564 write: IOPS=723, BW=2893KiB/s (2963kB/s)(2896KiB/1001msec); 0 zone resets 00:09:19.564 slat (nsec): min=9453, max=54243, avg=30980.59, stdev=9445.67 00:09:19.564 clat (usec): min=289, max=870, avg=610.12, stdev=105.80 00:09:19.564 lat (usec): min=299, max=912, avg=641.10, stdev=110.21 00:09:19.564 clat percentiles (usec): 00:09:19.564 | 1.00th=[ 351], 5.00th=[ 424], 10.00th=[ 465], 20.00th=[ 523], 00:09:19.564 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 644], 00:09:19.564 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 742], 95.00th=[ 758], 00:09:19.564 | 99.00th=[ 807], 99.50th=[ 816], 99.90th=[ 873], 99.95th=[ 873], 00:09:19.564 | 99.99th=[ 873] 00:09:19.564 bw ( KiB/s): min= 4096, max= 4096, per=42.76%, avg=4096.00, stdev= 0.00, samples=1 00:09:19.564 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:19.564 lat (usec) : 500=9.79%, 750=44.50%, 1000=24.35% 00:09:19.564 lat (msec) : 2=21.36% 00:09:19.564 cpu : usr=3.60%, sys=3.90%, ctx=1238, majf=0, minf=1 00:09:19.564 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:19.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.564 issued rwts: total=512,724,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.564 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:19.564 00:09:19.564 Run status group 0 (all jobs): 00:09:19.564 READ: bw=4093KiB/s (4191kB/s), 65.6KiB/s-2046KiB/s (67.2kB/s-2095kB/s), io=4240KiB (4342kB), run=1001-1036msec 00:09:19.564 WRITE: bw=9579KiB/s (9809kB/s), 1977KiB/s-2929KiB/s (2024kB/s-2999kB/s), io=9924KiB (10.2MB), run=1001-1036msec 00:09:19.564 00:09:19.564 Disk stats (read/write): 00:09:19.564 nvme0n1: ios=62/512, merge=0/0, ticks=753/295, in_queue=1048, util=90.38% 00:09:19.564 nvme0n2: ios=536/512, merge=0/0, ticks=753/238, in_queue=991, util=96.00% 00:09:19.564 nvme0n3: ios=36/512, merge=0/0, ticks=1488/223, in_queue=1711, util=95.97% 00:09:19.564 nvme0n4: ios=516/512, merge=0/0, ticks=1276/248, in_queue=1524, util=98.07% 00:09:19.564 13:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:19.564 [global] 00:09:19.564 thread=1 00:09:19.564 invalidate=1 00:09:19.564 rw=write 00:09:19.564 time_based=1 00:09:19.564 runtime=1 00:09:19.564 ioengine=libaio 00:09:19.564 direct=1 00:09:19.564 bs=4096 00:09:19.564 iodepth=128 00:09:19.564 norandommap=0 00:09:19.564 numjobs=1 00:09:19.564 00:09:19.564 verify_dump=1 00:09:19.564 verify_backlog=512 00:09:19.564 verify_state_save=0 00:09:19.564 do_verify=1 00:09:19.564 verify=crc32c-intel 00:09:19.564 [job0] 00:09:19.564 filename=/dev/nvme0n1 00:09:19.564 [job1] 00:09:19.565 filename=/dev/nvme0n2 00:09:19.565 [job2] 00:09:19.565 filename=/dev/nvme0n3 00:09:19.565 [job3] 00:09:19.565 filename=/dev/nvme0n4 00:09:19.565 Could not set queue depth (nvme0n1) 00:09:19.565 Could not set queue depth (nvme0n2) 00:09:19.565 Could not set queue depth (nvme0n3) 00:09:19.565 Could not set queue depth (nvme0n4) 00:09:19.832 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:09:19.832 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:19.832 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:19.832 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:19.832 fio-3.35 00:09:19.832 Starting 4 threads 00:09:21.217 00:09:21.217 job0: (groupid=0, jobs=1): err= 0: pid=2587975: Thu Dec 5 13:59:27 2024 00:09:21.217 read: IOPS=3691, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1005msec) 00:09:21.217 slat (nsec): min=999, max=18023k, avg=121275.70, stdev=842558.41 00:09:21.217 clat (usec): min=1379, max=47862, avg=15152.76, stdev=7127.94 00:09:21.217 lat (usec): min=5751, max=47864, avg=15274.04, stdev=7186.11 00:09:21.217 clat percentiles (usec): 00:09:21.217 | 1.00th=[ 5997], 5.00th=[ 6980], 10.00th=[ 7963], 20.00th=[ 9372], 00:09:21.217 | 30.00th=[11207], 40.00th=[13173], 50.00th=[13698], 60.00th=[15008], 00:09:21.217 | 70.00th=[16450], 80.00th=[19006], 90.00th=[22676], 95.00th=[28181], 00:09:21.217 | 99.00th=[43779], 99.50th=[45876], 99.90th=[47973], 99.95th=[47973], 00:09:21.217 | 99.99th=[47973] 00:09:21.217 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:09:21.218 slat (nsec): min=1725, max=12526k, avg=128865.99, stdev=770052.15 00:09:21.218 clat (usec): min=3089, max=63843, avg=17336.72, stdev=10913.87 00:09:21.218 lat (usec): min=3099, max=63848, avg=17465.59, stdev=10976.74 00:09:21.218 clat percentiles (usec): 00:09:21.218 | 1.00th=[ 4228], 5.00th=[ 6652], 10.00th=[ 7308], 20.00th=[10159], 00:09:21.218 | 30.00th=[13304], 40.00th=[14091], 50.00th=[15139], 60.00th=[15533], 00:09:21.218 | 70.00th=[18220], 80.00th=[21627], 90.00th=[26608], 95.00th=[40109], 00:09:21.218 | 99.00th=[62653], 99.50th=[63701], 99.90th=[63701], 99.95th=[63701], 00:09:21.218 | 99.99th=[63701] 00:09:21.218 bw ( KiB/s): min=16368, max=16384, per=17.35%, avg=16376.00, stdev=11.31, samples=2 00:09:21.218 iops : min= 4092, max= 4096, avg=4094.00, stdev= 2.83, samples=2 00:09:21.218 lat (msec) : 2=0.01%, 4=0.05%, 10=22.52%, 20=57.12%, 50=18.56% 00:09:21.218 lat (msec) : 100=1.73% 00:09:21.218 cpu : usr=3.59%, sys=4.38%, ctx=354, majf=0, minf=1 00:09:21.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:21.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:21.218 issued rwts: total=3710,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.218 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:21.218 job1: (groupid=0, jobs=1): err= 0: pid=2587995: Thu Dec 5 13:59:27 2024 00:09:21.218 read: IOPS=9170, BW=35.8MiB/s (37.6MB/s)(36.0MiB/1005msec) 00:09:21.218 slat (nsec): min=931, max=5541.8k, avg=54097.85, stdev=345148.76 00:09:21.218 clat (usec): min=3273, max=12341, avg=7025.09, stdev=1023.28 00:09:21.218 lat (usec): min=3280, max=12370, avg=7079.19, stdev=1063.78 00:09:21.218 clat percentiles (usec): 00:09:21.218 | 1.00th=[ 4686], 5.00th=[ 5473], 10.00th=[ 5866], 20.00th=[ 6259], 00:09:21.218 | 30.00th=[ 6390], 40.00th=[ 6652], 50.00th=[ 6980], 60.00th=[ 7308], 00:09:21.218 | 70.00th=[ 7439], 80.00th=[ 7767], 90.00th=[ 8160], 95.00th=[ 8717], 00:09:21.218 | 99.00th=[10290], 99.50th=[10421], 99.90th=[11600], 99.95th=[11994], 00:09:21.218 | 99.99th=[12387] 00:09:21.218 write: IOPS=9336, BW=36.5MiB/s (38.2MB/s)(36.7MiB/1005msec); 0 
zone resets 00:09:21.218 slat (nsec): min=1591, max=5199.6k, avg=49071.28, stdev=272222.34 00:09:21.218 clat (usec): min=1335, max=15118, avg=6666.68, stdev=1353.37 00:09:21.218 lat (usec): min=1345, max=15120, avg=6715.76, stdev=1374.33 00:09:21.218 clat percentiles (usec): 00:09:21.218 | 1.00th=[ 3589], 5.00th=[ 5014], 10.00th=[ 5604], 20.00th=[ 5866], 00:09:21.218 | 30.00th=[ 5997], 40.00th=[ 6194], 50.00th=[ 6587], 60.00th=[ 6980], 00:09:21.218 | 70.00th=[ 7177], 80.00th=[ 7308], 90.00th=[ 7635], 95.00th=[ 8225], 00:09:21.218 | 99.00th=[11994], 99.50th=[13698], 99.90th=[14746], 99.95th=[14746], 00:09:21.218 | 99.99th=[15139] 00:09:21.218 bw ( KiB/s): min=34720, max=39320, per=39.21%, avg=37020.00, stdev=3252.69, samples=2 00:09:21.218 iops : min= 8680, max= 9830, avg=9255.00, stdev=813.17, samples=2 00:09:21.218 lat (msec) : 2=0.13%, 4=0.94%, 10=96.91%, 20=2.02% 00:09:21.218 cpu : usr=4.78%, sys=6.77%, ctx=956, majf=0, minf=1 00:09:21.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:21.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:21.218 issued rwts: total=9216,9383,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.218 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:21.218 job2: (groupid=0, jobs=1): err= 0: pid=2588002: Thu Dec 5 13:59:27 2024 00:09:21.218 read: IOPS=3034, BW=11.9MiB/s (12.4MB/s)(11.9MiB/1005msec) 00:09:21.218 slat (nsec): min=1015, max=25812k, avg=157694.91, stdev=1212209.76 00:09:21.218 clat (usec): min=2797, max=49787, avg=18616.55, stdev=7977.56 00:09:21.218 lat (usec): min=4451, max=53441, avg=18774.25, stdev=8084.61 00:09:21.218 clat percentiles (usec): 00:09:21.218 | 1.00th=[ 6325], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[11600], 00:09:21.218 | 30.00th=[13566], 40.00th=[15139], 50.00th=[18220], 60.00th=[19006], 00:09:21.218 | 70.00th=[22676], 80.00th=[25822], 90.00th=[29230], 95.00th=[33817], 00:09:21.218 | 99.00th=[41157], 99.50th=[42206], 99.90th=[49546], 99.95th=[49546], 00:09:21.218 | 99.99th=[49546] 00:09:21.218 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:09:21.218 slat (nsec): min=1735, max=34378k, avg=162979.10, stdev=1060444.88 00:09:21.218 clat (usec): min=2611, max=68985, avg=20319.53, stdev=13202.76 00:09:21.218 lat (usec): min=2619, max=69024, avg=20482.51, stdev=13302.38 00:09:21.218 clat percentiles (usec): 00:09:21.218 | 1.00th=[ 4490], 5.00th=[ 8160], 10.00th=[ 8848], 20.00th=[12256], 00:09:21.218 | 30.00th=[14746], 40.00th=[15270], 50.00th=[15664], 60.00th=[17695], 00:09:21.218 | 70.00th=[20317], 80.00th=[24773], 90.00th=[40109], 95.00th=[57410], 00:09:21.218 | 99.00th=[63177], 99.50th=[63701], 99.90th=[64226], 99.95th=[68682], 00:09:21.218 | 99.99th=[68682] 00:09:21.218 bw ( KiB/s): min=12272, max=12304, per=13.02%, avg=12288.00, stdev=22.63, samples=2 00:09:21.218 iops : min= 3068, max= 3076, avg=3072.00, stdev= 5.66, samples=2 00:09:21.218 lat (msec) : 4=0.34%, 10=14.46%, 20=51.93%, 50=29.86%, 100=3.41% 00:09:21.218 cpu : usr=2.09%, sys=4.08%, ctx=329, majf=0, minf=1 00:09:21.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:21.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:21.218 issued rwts: total=3050,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.218 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:09:21.218 job3: (groupid=0, jobs=1): err= 0: pid=2588003: Thu Dec 5 13:59:27 2024 00:09:21.218 read: IOPS=6272, BW=24.5MiB/s (25.7MB/s)(24.6MiB/1004msec) 00:09:21.218 slat (nsec): min=975, max=37388k, avg=71010.07, stdev=724226.95 00:09:21.218 clat (usec): min=1318, max=55720, avg=10032.60, stdev=5774.97 00:09:21.218 lat (usec): min=1323, max=55748, avg=10103.61, stdev=5826.74 00:09:21.218 clat percentiles (usec): 00:09:21.218 | 1.00th=[ 1778], 5.00th=[ 5342], 10.00th=[ 6587], 20.00th=[ 7504], 00:09:21.218 | 30.00th=[ 7767], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8848], 00:09:21.218 | 70.00th=[ 9634], 80.00th=[11207], 90.00th=[16057], 95.00th=[18482], 00:09:21.218 | 99.00th=[43254], 99.50th=[43254], 99.90th=[45876], 99.95th=[45876], 00:09:21.218 | 99.99th=[55837] 00:09:21.218 write: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec); 0 zone resets 00:09:21.218 slat (nsec): min=1594, max=14050k, avg=54770.66, stdev=478180.82 00:09:21.218 clat (usec): min=310, max=69866, avg=8967.21, stdev=7981.33 00:09:21.218 lat (usec): min=343, max=69873, avg=9021.98, stdev=8016.56 00:09:21.218 clat percentiles (usec): 00:09:21.218 | 1.00th=[ 898], 5.00th=[ 2089], 10.00th=[ 3261], 20.00th=[ 4817], 00:09:21.218 | 30.00th=[ 5932], 40.00th=[ 6783], 50.00th=[ 7570], 60.00th=[ 7767], 00:09:21.218 | 70.00th=[ 8291], 80.00th=[10945], 90.00th=[16188], 95.00th=[20579], 00:09:21.218 | 99.00th=[56886], 99.50th=[66323], 99.90th=[69731], 99.95th=[69731], 00:09:21.218 | 99.99th=[69731] 00:09:21.218 bw ( KiB/s): min=28672, max=28672, per=30.37%, avg=28672.00, stdev= 0.00, samples=2 00:09:21.218 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=2 00:09:21.218 lat (usec) : 500=0.09%, 750=0.33%, 1000=0.18% 00:09:21.218 lat (msec) : 2=2.14%, 4=6.09%, 10=65.83%, 20=20.10%, 50=4.58% 00:09:21.218 lat (msec) : 100=0.65% 00:09:21.218 cpu : usr=6.48%, sys=7.18%, ctx=449, majf=0, minf=2 00:09:21.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:09:21.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:21.218 issued rwts: total=6298,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.218 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:21.218 00:09:21.218 Run status group 0 (all jobs): 00:09:21.218 READ: bw=86.6MiB/s (90.8MB/s), 11.9MiB/s-35.8MiB/s (12.4MB/s-37.6MB/s), io=87.0MiB (91.2MB), run=1004-1005msec 00:09:21.218 WRITE: bw=92.2MiB/s (96.7MB/s), 11.9MiB/s-36.5MiB/s (12.5MB/s-38.2MB/s), io=92.7MiB (97.2MB), run=1004-1005msec 00:09:21.218 00:09:21.218 Disk stats (read/write): 00:09:21.218 nvme0n1: ios=3098/3245, merge=0/0, ticks=45860/56194, in_queue=102054, util=96.39% 00:09:21.218 nvme0n2: ios=7733/7803, merge=0/0, ticks=30171/26208, in_queue=56379, util=99.59% 00:09:21.218 nvme0n3: ios=2467/2560, merge=0/0, ticks=45713/46576, in_queue=92289, util=98.84% 00:09:21.218 nvme0n4: ios=5160/6144, merge=0/0, ticks=47779/51671, in_queue=99450, util=91.56% 00:09:21.218 13:59:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:21.218 [global] 00:09:21.218 thread=1 00:09:21.218 invalidate=1 00:09:21.218 rw=randwrite 00:09:21.218 time_based=1 00:09:21.218 runtime=1 00:09:21.218 ioengine=libaio 00:09:21.218 direct=1 00:09:21.218 bs=4096 00:09:21.218 iodepth=128 00:09:21.219 norandommap=0 
00:09:21.219 numjobs=1 00:09:21.219 00:09:21.219 verify_dump=1 00:09:21.219 verify_backlog=512 00:09:21.219 verify_state_save=0 00:09:21.219 do_verify=1 00:09:21.219 verify=crc32c-intel 00:09:21.219 [job0] 00:09:21.219 filename=/dev/nvme0n1 00:09:21.219 [job1] 00:09:21.219 filename=/dev/nvme0n2 00:09:21.219 [job2] 00:09:21.219 filename=/dev/nvme0n3 00:09:21.219 [job3] 00:09:21.219 filename=/dev/nvme0n4 00:09:21.219 Could not set queue depth (nvme0n1) 00:09:21.219 Could not set queue depth (nvme0n2) 00:09:21.219 Could not set queue depth (nvme0n3) 00:09:21.219 Could not set queue depth (nvme0n4) 00:09:21.480 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:21.480 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:21.480 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:21.480 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:21.480 fio-3.35 00:09:21.480 Starting 4 threads 00:09:22.889 00:09:22.889 job0: (groupid=0, jobs=1): err= 0: pid=2588484: Thu Dec 5 13:59:28 2024 00:09:22.889 read: IOPS=8212, BW=32.1MiB/s (33.6MB/s)(32.3MiB/1006msec) 00:09:22.889 slat (nsec): min=959, max=7679.4k, avg=62001.13, stdev=454165.70 00:09:22.889 clat (usec): min=2733, max=16076, avg=8045.00, stdev=1899.56 00:09:22.889 lat (usec): min=2736, max=16088, avg=8107.00, stdev=1924.23 00:09:22.889 clat percentiles (usec): 00:09:22.889 | 1.00th=[ 3523], 5.00th=[ 5538], 10.00th=[ 6128], 20.00th=[ 6652], 00:09:22.889 | 30.00th=[ 7046], 40.00th=[ 7439], 50.00th=[ 7832], 60.00th=[ 8094], 00:09:22.889 | 70.00th=[ 8455], 80.00th=[ 9372], 90.00th=[10814], 95.00th=[11863], 00:09:22.889 | 99.00th=[13566], 99.50th=[14091], 99.90th=[14746], 99.95th=[15139], 00:09:22.889 | 99.99th=[16057] 00:09:22.889 write: IOPS=8652, BW=33.8MiB/s (35.4MB/s)(34.0MiB/1006msec); 0 zone resets 00:09:22.889 slat (nsec): min=1602, max=18468k, avg=50756.41, stdev=362436.56 00:09:22.889 clat (usec): min=1136, max=24050, avg=7006.66, stdev=2309.79 00:09:22.889 lat (usec): min=1146, max=24059, avg=7057.42, stdev=2326.06 00:09:22.889 clat percentiles (usec): 00:09:22.889 | 1.00th=[ 2540], 5.00th=[ 3851], 10.00th=[ 4359], 20.00th=[ 5866], 00:09:22.889 | 30.00th=[ 6652], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7242], 00:09:22.889 | 70.00th=[ 7439], 80.00th=[ 7832], 90.00th=[ 8094], 95.00th=[ 9110], 00:09:22.889 | 99.00th=[20317], 99.50th=[20841], 99.90th=[23987], 99.95th=[23987], 00:09:22.889 | 99.99th=[23987] 00:09:22.889 bw ( KiB/s): min=33456, max=35720, per=32.13%, avg=34588.00, stdev=1600.89, samples=2 00:09:22.889 iops : min= 8364, max= 8930, avg=8647.00, stdev=400.22, samples=2 00:09:22.889 lat (msec) : 2=0.13%, 4=3.64%, 10=87.42%, 20=8.17%, 50=0.65% 00:09:22.889 cpu : usr=6.07%, sys=8.36%, ctx=820, majf=0, minf=1 00:09:22.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:22.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:22.889 issued rwts: total=8262,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.889 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:22.889 job1: (groupid=0, jobs=1): err= 0: pid=2588494: Thu Dec 5 13:59:28 2024 00:09:22.889 read: IOPS=8175, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1002msec) 00:09:22.889 slat 
(nsec): min=896, max=6871.6k, avg=58651.24, stdev=399663.38 00:09:22.889 clat (usec): min=3505, max=17513, avg=8171.64, stdev=1387.65 00:09:22.889 lat (usec): min=3512, max=20386, avg=8230.29, stdev=1425.27 00:09:22.889 clat percentiles (usec): 00:09:22.889 | 1.00th=[ 4621], 5.00th=[ 6456], 10.00th=[ 7046], 20.00th=[ 7373], 00:09:22.889 | 30.00th=[ 7570], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[ 8160], 00:09:22.889 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[10159], 95.00th=[10683], 00:09:22.889 | 99.00th=[12518], 99.50th=[13435], 99.90th=[17433], 99.95th=[17433], 00:09:22.890 | 99.99th=[17433] 00:09:22.890 write: IOPS=8204, BW=32.0MiB/s (33.6MB/s)(32.1MiB/1002msec); 0 zone resets 00:09:22.890 slat (nsec): min=1489, max=6857.6k, avg=52300.16, stdev=391666.37 00:09:22.890 clat (usec): min=662, max=14928, avg=7323.55, stdev=1674.21 00:09:22.890 lat (usec): min=1012, max=14947, avg=7375.85, stdev=1697.00 00:09:22.890 clat percentiles (usec): 00:09:22.890 | 1.00th=[ 1516], 5.00th=[ 4490], 10.00th=[ 5080], 20.00th=[ 6325], 00:09:22.890 | 30.00th=[ 7046], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7635], 00:09:22.890 | 70.00th=[ 7832], 80.00th=[ 8160], 90.00th=[ 8848], 95.00th=[10290], 00:09:22.890 | 99.00th=[11863], 99.50th=[12911], 99.90th=[13960], 99.95th=[13960], 00:09:22.890 | 99.99th=[14877] 00:09:22.890 bw ( KiB/s): min=32768, max=32768, per=30.43%, avg=32768.00, stdev= 0.00, samples=2 00:09:22.890 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=2 00:09:22.890 lat (usec) : 750=0.01%, 1000=0.01% 00:09:22.890 lat (msec) : 2=0.57%, 4=1.54%, 10=89.24%, 20=8.63% 00:09:22.890 cpu : usr=5.99%, sys=8.29%, ctx=563, majf=0, minf=2 00:09:22.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:22.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:22.890 issued rwts: total=8192,8221,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.890 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:22.890 job2: (groupid=0, jobs=1): err= 0: pid=2588513: Thu Dec 5 13:59:28 2024 00:09:22.890 read: IOPS=5468, BW=21.4MiB/s (22.4MB/s)(21.5MiB/1008msec) 00:09:22.890 slat (nsec): min=984, max=8966.4k, avg=79414.53, stdev=575767.40 00:09:22.890 clat (usec): min=2889, max=31687, avg=11007.36, stdev=3223.88 00:09:22.890 lat (usec): min=3575, max=31695, avg=11086.78, stdev=3264.93 00:09:22.890 clat percentiles (usec): 00:09:22.890 | 1.00th=[ 4883], 5.00th=[ 7635], 10.00th=[ 8291], 20.00th=[ 8979], 00:09:22.890 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10552], 60.00th=[10945], 00:09:22.890 | 70.00th=[11469], 80.00th=[11994], 90.00th=[12780], 95.00th=[16712], 00:09:22.890 | 99.00th=[24511], 99.50th=[29230], 99.90th=[31589], 99.95th=[31589], 00:09:22.890 | 99.99th=[31589] 00:09:22.890 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets 00:09:22.890 slat (nsec): min=1629, max=10711k, avg=89482.57, stdev=594559.83 00:09:22.890 clat (usec): min=1034, max=39834, avg=11884.03, stdev=5908.67 00:09:22.890 lat (usec): min=1039, max=39843, avg=11973.51, stdev=5957.48 00:09:22.890 clat percentiles (usec): 00:09:22.890 | 1.00th=[ 4047], 5.00th=[ 5997], 10.00th=[ 7767], 20.00th=[ 8356], 00:09:22.890 | 30.00th=[ 8717], 40.00th=[ 9765], 50.00th=[10683], 60.00th=[11338], 00:09:22.890 | 70.00th=[11600], 80.00th=[14091], 90.00th=[18482], 95.00th=[26608], 00:09:22.890 | 99.00th=[35914], 99.50th=[36439], 99.90th=[39584], 99.95th=[39584], 
00:09:22.890 | 99.99th=[39584] 00:09:22.890 bw ( KiB/s): min=20456, max=24600, per=20.92%, avg=22528.00, stdev=2930.25, samples=2 00:09:22.890 iops : min= 5114, max= 6150, avg=5632.00, stdev=732.56, samples=2 00:09:22.890 lat (msec) : 2=0.18%, 4=0.48%, 10=38.68%, 20=54.90%, 50=5.77% 00:09:22.890 cpu : usr=4.67%, sys=6.06%, ctx=350, majf=0, minf=1 00:09:22.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:22.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:22.890 issued rwts: total=5512,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.890 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:22.890 job3: (groupid=0, jobs=1): err= 0: pid=2588520: Thu Dec 5 13:59:28 2024 00:09:22.890 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:09:22.890 slat (nsec): min=987, max=11896k, avg=101715.33, stdev=650172.36 00:09:22.890 clat (usec): min=6272, max=43314, avg=11814.19, stdev=5206.13 00:09:22.890 lat (usec): min=6330, max=43323, avg=11915.91, stdev=5269.79 00:09:22.890 clat percentiles (usec): 00:09:22.890 | 1.00th=[ 7046], 5.00th=[ 8094], 10.00th=[ 8848], 20.00th=[ 9372], 00:09:22.890 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10421], 00:09:22.890 | 70.00th=[11207], 80.00th=[11994], 90.00th=[17171], 95.00th=[24249], 00:09:22.890 | 99.00th=[34341], 99.50th=[35390], 99.90th=[43254], 99.95th=[43254], 00:09:22.890 | 99.99th=[43254] 00:09:22.890 write: IOPS=4538, BW=17.7MiB/s (18.6MB/s)(17.9MiB/1008msec); 0 zone resets 00:09:22.890 slat (nsec): min=1601, max=14987k, avg=123254.70, stdev=644860.63 00:09:22.890 clat (usec): min=5357, max=84602, avg=17261.87, stdev=16431.69 00:09:22.890 lat (usec): min=5486, max=84610, avg=17385.12, stdev=16541.32 00:09:22.890 clat percentiles (usec): 00:09:22.890 | 1.00th=[ 6325], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[ 9110], 00:09:22.890 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10421], 00:09:22.890 | 70.00th=[15008], 80.00th=[21627], 90.00th=[35914], 95.00th=[65799], 00:09:22.890 | 99.00th=[80217], 99.50th=[81265], 99.90th=[84411], 99.95th=[84411], 00:09:22.890 | 99.99th=[84411] 00:09:22.890 bw ( KiB/s): min=14464, max=21112, per=16.52%, avg=17788.00, stdev=4700.85, samples=2 00:09:22.890 iops : min= 3616, max= 5278, avg=4447.00, stdev=1175.21, samples=2 00:09:22.890 lat (msec) : 10=53.15%, 20=31.77%, 50=11.67%, 100=3.40% 00:09:22.890 cpu : usr=2.88%, sys=4.07%, ctx=594, majf=0, minf=1 00:09:22.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:22.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:22.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:22.890 issued rwts: total=4096,4575,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:22.890 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:22.890 00:09:22.890 Run status group 0 (all jobs): 00:09:22.890 READ: bw=101MiB/s (106MB/s), 15.9MiB/s-32.1MiB/s (16.6MB/s-33.6MB/s), io=102MiB (107MB), run=1002-1008msec 00:09:22.890 WRITE: bw=105MiB/s (110MB/s), 17.7MiB/s-33.8MiB/s (18.6MB/s-35.4MB/s), io=106MiB (111MB), run=1002-1008msec 00:09:22.890 00:09:22.890 Disk stats (read/write): 00:09:22.890 nvme0n1: ios=7015/7168, merge=0/0, ticks=52847/46116, in_queue=98963, util=90.88% 00:09:22.890 nvme0n2: ios=6706/6946, merge=0/0, ticks=38122/34497, in_queue=72619, util=91.03% 00:09:22.890 nvme0n3: ios=4385/4608, 
merge=0/0, ticks=36119/43316, in_queue=79435, util=98.31% 00:09:22.890 nvme0n4: ios=3648/3623, merge=0/0, ticks=21947/29859, in_queue=51806, util=99.57% 00:09:22.890 13:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:22.890 13:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2588592 00:09:22.890 13:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:22.890 13:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:22.890 [global] 00:09:22.890 thread=1 00:09:22.890 invalidate=1 00:09:22.890 rw=read 00:09:22.890 time_based=1 00:09:22.890 runtime=10 00:09:22.890 ioengine=libaio 00:09:22.890 direct=1 00:09:22.890 bs=4096 00:09:22.890 iodepth=1 00:09:22.890 norandommap=1 00:09:22.890 numjobs=1 00:09:22.890 00:09:22.890 [job0] 00:09:22.890 filename=/dev/nvme0n1 00:09:22.890 [job1] 00:09:22.890 filename=/dev/nvme0n2 00:09:22.890 [job2] 00:09:22.890 filename=/dev/nvme0n3 00:09:22.890 [job3] 00:09:22.890 filename=/dev/nvme0n4 00:09:22.890 Could not set queue depth (nvme0n1) 00:09:22.890 Could not set queue depth (nvme0n2) 00:09:22.890 Could not set queue depth (nvme0n3) 00:09:22.890 Could not set queue depth (nvme0n4) 00:09:23.152 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:23.152 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:23.152 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:23.152 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:23.152 fio-3.35 00:09:23.152 Starting 4 threads 00:09:25.693 13:59:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:25.693 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=13832192, buflen=4096 00:09:25.693 fio: pid=2589002, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:25.955 13:59:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:25.955 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=4247552, buflen=4096 00:09:25.955 fio: pid=2588995, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:25.955 13:59:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:25.955 13:59:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:26.216 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=737280, buflen=4096 00:09:26.216 fio: pid=2588955, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:26.216 13:59:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:26.216 13:59:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 
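[annotation] At this point the read-phase fio job launched by fio-wrapper is still running against the exported namespaces, and the harness begins deleting the backing bdevs over the RPC socket; each deletion hot-removes a namespace out from under fio, so the io_u errors above (error=95, Operation not supported, and error=5, Input/output error) are the intended outcome rather than a malfunction. A condensed sketch of that pattern, assuming a running SPDK target, a backgrounded fio job, and bdev names taken from this run for illustration:

    # Hot-remove the backing bdevs while fio is mid-run; rpc.py talks to
    # the target over its default RPC socket (/var/tmp/spdk.sock).
    RPC_PY=./scripts/rpc.py

    $RPC_PY bdev_raid_delete concat0     # fio starts seeing io_u errors on that namespace
    $RPC_PY bdev_raid_delete raid0
    for malloc_bdev in Malloc0 Malloc1 Malloc2; do
        $RPC_PY bdev_malloc_delete "$malloc_bdev"
    done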
00:09:26.477 13:59:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:26.477 13:59:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:26.477 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=315392, buflen=4096 00:09:26.477 fio: pid=2588974, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:09:26.477 00:09:26.477 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2588955: Thu Dec 5 13:59:32 2024 00:09:26.477 read: IOPS=61, BW=245KiB/s (251kB/s)(720KiB/2939msec) 00:09:26.477 slat (usec): min=6, max=245, avg=25.05, stdev=17.74 00:09:26.477 clat (usec): min=475, max=42091, avg=16290.48, stdev=19833.91 00:09:26.477 lat (usec): min=502, max=42117, avg=16315.52, stdev=19836.87 00:09:26.477 clat percentiles (usec): 00:09:26.477 | 1.00th=[ 529], 5.00th=[ 676], 10.00th=[ 766], 20.00th=[ 848], 00:09:26.477 | 30.00th=[ 906], 40.00th=[ 930], 50.00th=[ 988], 60.00th=[ 1057], 00:09:26.477 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:26.477 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:26.477 | 99.99th=[42206] 00:09:26.477 bw ( KiB/s): min= 96, max= 584, per=4.52%, avg=270.40, stdev=238.49, samples=5 00:09:26.477 iops : min= 24, max= 146, avg=67.60, stdev=59.62, samples=5 00:09:26.477 lat (usec) : 500=0.55%, 750=6.63%, 1000=45.86% 00:09:26.477 lat (msec) : 2=8.84%, 50=37.57% 00:09:26.477 cpu : usr=0.14%, sys=0.14%, ctx=182, majf=0, minf=1 00:09:26.477 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:26.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.477 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.477 issued rwts: total=181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.477 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:26.477 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2588974: Thu Dec 5 13:59:32 2024 00:09:26.477 read: IOPS=24, BW=98.5KiB/s (101kB/s)(308KiB/3126msec) 00:09:26.477 slat (usec): min=26, max=4352, avg=82.79, stdev=489.68 00:09:26.477 clat (usec): min=629, max=43165, avg=40500.62, stdev=6548.23 00:09:26.477 lat (usec): min=656, max=45776, avg=40584.04, stdev=6573.06 00:09:26.477 clat percentiles (usec): 00:09:26.477 | 1.00th=[ 627], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:26.477 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:09:26.477 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:26.477 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:09:26.477 | 99.99th=[43254] 00:09:26.477 bw ( KiB/s): min= 95, max= 104, per=1.64%, avg=98.50, stdev= 4.28, samples=6 00:09:26.477 iops : min= 23, max= 26, avg=24.50, stdev= 1.22, samples=6 00:09:26.477 lat (usec) : 750=1.28%, 1000=1.28% 00:09:26.477 lat (msec) : 50=96.15% 00:09:26.477 cpu : usr=0.00%, sys=0.10%, ctx=81, majf=0, minf=2 00:09:26.477 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:26.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.477 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.477 issued rwts: total=78,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:09:26.477 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:26.477 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2588995: Thu Dec 5 13:59:32 2024 00:09:26.477 read: IOPS=378, BW=1513KiB/s (1550kB/s)(4148KiB/2741msec) 00:09:26.477 slat (nsec): min=6831, max=62567, avg=27531.77, stdev=3571.72 00:09:26.477 clat (usec): min=527, max=42046, avg=2607.82, stdev=7831.34 00:09:26.477 lat (usec): min=554, max=42074, avg=2635.36, stdev=7831.33 00:09:26.477 clat percentiles (usec): 00:09:26.477 | 1.00th=[ 717], 5.00th=[ 816], 10.00th=[ 873], 20.00th=[ 930], 00:09:26.477 | 30.00th=[ 963], 40.00th=[ 996], 50.00th=[ 1045], 60.00th=[ 1074], 00:09:26.477 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1221], 00:09:26.477 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:09:26.477 | 99.99th=[42206] 00:09:26.477 bw ( KiB/s): min= 872, max= 2744, per=26.60%, avg=1590.40, stdev=781.96, samples=5 00:09:26.477 iops : min= 218, max= 686, avg=397.60, stdev=195.49, samples=5 00:09:26.477 lat (usec) : 750=1.93%, 1000=38.15% 00:09:26.477 lat (msec) : 2=55.88%, 50=3.95% 00:09:26.477 cpu : usr=0.80%, sys=1.42%, ctx=1038, majf=0, minf=2 00:09:26.477 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:26.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.477 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.477 issued rwts: total=1038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.477 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:26.477 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2589002: Thu Dec 5 13:59:32 2024 00:09:26.477 read: IOPS=1317, BW=5270KiB/s (5397kB/s)(13.2MiB/2563msec) 00:09:26.477 slat (nsec): min=6424, max=62140, avg=23561.53, stdev=7306.39 00:09:26.477 clat (usec): min=206, max=4398, avg=729.42, stdev=140.61 00:09:26.477 lat (usec): min=214, max=4424, avg=752.98, stdev=142.28 00:09:26.477 clat percentiles (usec): 00:09:26.477 | 1.00th=[ 363], 5.00th=[ 490], 10.00th=[ 570], 20.00th=[ 627], 00:09:26.477 | 30.00th=[ 676], 40.00th=[ 717], 50.00th=[ 742], 60.00th=[ 766], 00:09:26.477 | 70.00th=[ 807], 80.00th=[ 832], 90.00th=[ 865], 95.00th=[ 889], 00:09:26.477 | 99.00th=[ 947], 99.50th=[ 963], 99.90th=[ 1467], 99.95th=[ 2114], 00:09:26.477 | 99.99th=[ 4424] 00:09:26.477 bw ( KiB/s): min= 5192, max= 5376, per=88.31%, avg=5278.40, stdev=90.76, samples=5 00:09:26.477 iops : min= 1298, max= 1344, avg=1319.60, stdev=22.69, samples=5 00:09:26.477 lat (usec) : 250=0.09%, 500=5.51%, 750=48.55%, 1000=45.62% 00:09:26.477 lat (msec) : 2=0.15%, 4=0.03%, 10=0.03% 00:09:26.477 cpu : usr=1.60%, sys=3.32%, ctx=3378, majf=0, minf=2 00:09:26.478 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:26.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.478 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.478 issued rwts: total=3378,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.478 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:26.478 00:09:26.478 Run status group 0 (all jobs): 00:09:26.478 READ: bw=5977KiB/s (6120kB/s), 98.5KiB/s-5270KiB/s (101kB/s-5397kB/s), io=18.2MiB (19.1MB), run=2563-3126msec 00:09:26.478 00:09:26.478 Disk stats (read/write): 00:09:26.478 nvme0n1: ios=177/0, merge=0/0, ticks=2809/0, 
in_queue=2809, util=94.76% 00:09:26.478 nvme0n2: ios=105/0, merge=0/0, ticks=3834/0, in_queue=3834, util=99.91% 00:09:26.478 nvme0n3: ios=1008/0, merge=0/0, ticks=2435/0, in_queue=2435, util=95.99% 00:09:26.478 nvme0n4: ios=3075/0, merge=0/0, ticks=2180/0, in_queue=2180, util=96.06% 00:09:26.478 13:59:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:26.478 13:59:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:26.738 13:59:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:26.738 13:59:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:26.999 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:26.999 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:26.999 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:26.999 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:27.260 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:27.260 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2588592 00:09:27.260 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:27.260 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:27.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.260 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:27.260 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:27.260 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:27.260 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:27.260 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:27.260 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:27.520 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:27.520 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:27.520 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:27.520 nvmf hotplug test: fio failed as expected 00:09:27.520 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
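[annotation] With the remaining malloc bdevs removed, the harness reaps the fio wrapper and inverts the usual pass criterion: a non-zero exit status (fio_status=4 here) is what the hotplug test expects, after which the initiator is disconnected and the subsystem deleted. A minimal sketch of that check, assuming $fio_pid holds the PID of the backgrounded wrapper (the exact status plumbing here is an illustration, not the verbatim fio.sh logic):

    fio_status=0
    wait "$fio_pid" || fio_status=$?     # non-zero exit is the expected result

    if [ "$fio_status" -eq 0 ]; then
        echo 'nvmf hotplug test: fio succeeded unexpectedly' >&2
        exit 1
    fi
    echo 'nvmf hotplug test: fio failed as expected'

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1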
00:09:27.520 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:27.520 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:27.520 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:27.520 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:27.520 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:27.520 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:27.520 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:27.520 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:27.520 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:27.520 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:27.520 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:27.520 rmmod nvme_tcp 00:09:27.520 rmmod nvme_fabrics 00:09:27.520 rmmod nvme_keyring 00:09:27.780 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:27.780 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:27.780 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:27.780 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2585025 ']' 00:09:27.780 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2585025 00:09:27.780 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2585025 ']' 00:09:27.780 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2585025 00:09:27.780 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:27.780 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.780 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2585025 00:09:27.780 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:27.780 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:27.780 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2585025' 00:09:27.780 killing process with pid 2585025 00:09:27.780 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2585025 00:09:27.780 13:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2585025 00:09:27.780 13:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:27.780 13:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:27.780 13:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:27.780 13:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 
00:09:27.780 13:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:27.780 13:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:27.780 13:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:27.780 13:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:27.780 13:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:27.780 13:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.780 13:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.780 13:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:30.361 00:09:30.361 real 0m29.293s 00:09:30.361 user 2m40.605s 00:09:30.361 sys 0m9.451s 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.361 ************************************ 00:09:30.361 END TEST nvmf_fio_target 00:09:30.361 ************************************ 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:30.361 ************************************ 00:09:30.361 START TEST nvmf_bdevio 00:09:30.361 ************************************ 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:30.361 * Looking for test storage... 
00:09:30.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.361 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:30.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.362 --rc genhtml_branch_coverage=1 00:09:30.362 --rc genhtml_function_coverage=1 00:09:30.362 --rc genhtml_legend=1 00:09:30.362 --rc geninfo_all_blocks=1 00:09:30.362 --rc geninfo_unexecuted_blocks=1 00:09:30.362 00:09:30.362 ' 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:30.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.362 --rc genhtml_branch_coverage=1 00:09:30.362 --rc genhtml_function_coverage=1 00:09:30.362 --rc genhtml_legend=1 00:09:30.362 --rc geninfo_all_blocks=1 00:09:30.362 --rc geninfo_unexecuted_blocks=1 00:09:30.362 00:09:30.362 ' 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:30.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.362 --rc genhtml_branch_coverage=1 00:09:30.362 --rc genhtml_function_coverage=1 00:09:30.362 --rc genhtml_legend=1 00:09:30.362 --rc geninfo_all_blocks=1 00:09:30.362 --rc geninfo_unexecuted_blocks=1 00:09:30.362 00:09:30.362 ' 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:30.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.362 --rc genhtml_branch_coverage=1 00:09:30.362 --rc genhtml_function_coverage=1 00:09:30.362 --rc genhtml_legend=1 00:09:30.362 --rc geninfo_all_blocks=1 00:09:30.362 --rc geninfo_unexecuted_blocks=1 00:09:30.362 00:09:30.362 ' 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:30.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:30.362 13:59:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:38.495 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:38.495 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:38.495 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:38.495 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:38.495 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:38.495 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:38.495 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:38.495 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:38.495 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:38.495 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:38.495 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:38.495 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:38.495 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:38.495 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:38.495 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:38.495 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:38.495 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:38.495 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:38.495 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:38.496 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:38.496 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:38.496 13:59:43 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:38.496 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:38.496 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:38.496 
13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:38.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:38.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:09:38.496 00:09:38.496 --- 10.0.0.2 ping statistics --- 00:09:38.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.496 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:38.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:38.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:09:38.496 00:09:38.496 --- 10.0.0.1 ping statistics --- 00:09:38.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.496 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2594081 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2594081 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2594081 ']' 00:09:38.496 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.497 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.497 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.497 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.497 13:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:38.497 [2024-12-05 13:59:43.782072] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
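The nvmf_tcp_init trace above assembles the harness's two-namespace TCP topology: the target port cvl_0_0 is moved into a private network namespace, the initiator port cvl_0_1 keeps 10.0.0.1 in the root namespace, a comment-tagged iptables rule opens the NVMe/TCP port, and both directions are ping-verified. A minimal standalone sketch of the same setup, assuming root privileges and the interface names from this run:

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1      # start from clean addresses
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                         # target side enters the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator IP stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Tag the ACCEPT rule so teardown can later strip exactly the SPDK rules:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                      # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                  # target -> initiator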
00:09:38.497 [2024-12-05 13:59:43.782124] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.497 [2024-12-05 13:59:43.876674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.497 [2024-12-05 13:59:43.923367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.497 [2024-12-05 13:59:43.923421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.497 [2024-12-05 13:59:43.923430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.497 [2024-12-05 13:59:43.923437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.497 [2024-12-05 13:59:43.923443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.497 [2024-12-05 13:59:43.925488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:38.497 [2024-12-05 13:59:43.925624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:38.497 [2024-12-05 13:59:43.925907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:38.497 [2024-12-05 13:59:43.925910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:38.497 [2024-12-05 13:59:44.654851] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:38.497 Malloc0 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.497 13:59:44 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:38.497 [2024-12-05 13:59:44.734298] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:38.497 { 00:09:38.497 "params": { 00:09:38.497 "name": "Nvme$subsystem", 00:09:38.497 "trtype": "$TEST_TRANSPORT", 00:09:38.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:38.497 "adrfam": "ipv4", 00:09:38.497 "trsvcid": "$NVMF_PORT", 00:09:38.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:38.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:38.497 "hdgst": ${hdgst:-false}, 00:09:38.497 "ddgst": ${ddgst:-false} 00:09:38.497 }, 00:09:38.497 "method": "bdev_nvme_attach_controller" 00:09:38.497 } 00:09:38.497 EOF 00:09:38.497 )") 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:38.497 13:59:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:38.497 "params": { 00:09:38.497 "name": "Nvme1", 00:09:38.497 "trtype": "tcp", 00:09:38.497 "traddr": "10.0.0.2", 00:09:38.497 "adrfam": "ipv4", 00:09:38.497 "trsvcid": "4420", 00:09:38.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:38.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:38.497 "hdgst": false, 00:09:38.497 "ddgst": false 00:09:38.497 }, 00:09:38.497 "method": "bdev_nvme_attach_controller" 00:09:38.497 }' 00:09:38.758 [2024-12-05 13:59:44.791934] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
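With the namespace up, nvmfappstart launches nvmf_tgt inside it and the rpc_cmd calls traced above provision the target over the /var/tmp/spdk.sock RPC channel; they map one-to-one onto SPDK's scripts/rpc.py. A condensed sketch from the SPDK source tree root (the socket-wait loop is a simplified stand-in for the waitforlisten helper):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done     # wait for the RPC socket
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192            # same flags as the trace above
    $rpc bdev_malloc_create 64 512 -b Malloc0               # 64 MiB malloc bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON fragment printed by gen_nvmf_target_json is the initiator-side mirror of the last call: bdevio consumes it as a bdev_nvme_attach_controller entry and connects back to 10.0.0.2:4420 as Nvme1.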
00:09:38.758 [2024-12-05 13:59:44.791998] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2594283 ] 00:09:38.758 [2024-12-05 13:59:44.884842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:38.758 [2024-12-05 13:59:44.942165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.758 [2024-12-05 13:59:44.942328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.758 [2024-12-05 13:59:44.942330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.017 I/O targets: 00:09:39.017 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:39.017 00:09:39.017 00:09:39.017 CUnit - A unit testing framework for C - Version 2.1-3 00:09:39.017 http://cunit.sourceforge.net/ 00:09:39.017 00:09:39.017 00:09:39.017 Suite: bdevio tests on: Nvme1n1 00:09:39.277 Test: blockdev write read block ...passed 00:09:39.277 Test: blockdev write zeroes read block ...passed 00:09:39.277 Test: blockdev write zeroes read no split ...passed 00:09:39.277 Test: blockdev write zeroes read split ...passed 00:09:39.277 Test: blockdev write zeroes read split partial ...passed 00:09:39.277 Test: blockdev reset ...[2024-12-05 13:59:45.407002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:39.277 [2024-12-05 13:59:45.407103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cc970 (9): Bad file descriptor 00:09:39.277 [2024-12-05 13:59:45.464648] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:39.277 passed 00:09:39.277 Test: blockdev write read 8 blocks ...passed 00:09:39.277 Test: blockdev write read size > 128k ...passed 00:09:39.277 Test: blockdev write read invalid size ...passed 00:09:39.278 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:39.278 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:39.278 Test: blockdev write read max offset ...passed 00:09:39.537 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:39.537 Test: blockdev writev readv 8 blocks ...passed 00:09:39.537 Test: blockdev writev readv 30 x 1block ...passed 00:09:39.537 Test: blockdev writev readv block ...passed 00:09:39.537 Test: blockdev writev readv size > 128k ...passed 00:09:39.537 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:39.537 Test: blockdev comparev and writev ...[2024-12-05 13:59:45.685966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:39.537 [2024-12-05 13:59:45.686022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:39.537 [2024-12-05 13:59:45.686040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:39.538 [2024-12-05 13:59:45.686048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:39.538 [2024-12-05 13:59:45.686427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:39.538 [2024-12-05 13:59:45.686439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:39.538 [2024-12-05 13:59:45.686459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:39.538 [2024-12-05 13:59:45.686468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:39.538 [2024-12-05 13:59:45.686849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:39.538 [2024-12-05 13:59:45.686865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:39.538 [2024-12-05 13:59:45.686879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:39.538 [2024-12-05 13:59:45.686887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:39.538 [2024-12-05 13:59:45.687294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:39.538 [2024-12-05 13:59:45.687306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:39.538 [2024-12-05 13:59:45.687320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:39.538 [2024-12-05 13:59:45.687328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:39.538 passed 00:09:39.538 Test: blockdev nvme passthru rw ...passed 00:09:39.538 Test: blockdev nvme passthru vendor specific ...[2024-12-05 13:59:45.771924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:39.538 [2024-12-05 13:59:45.771938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:39.538 [2024-12-05 13:59:45.772155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:39.538 [2024-12-05 13:59:45.772165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:39.538 [2024-12-05 13:59:45.772377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:39.538 [2024-12-05 13:59:45.772387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:39.538 [2024-12-05 13:59:45.772614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:39.538 [2024-12-05 13:59:45.772624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:39.538 passed 00:09:39.538 Test: blockdev nvme admin passthru ...passed 00:09:39.538 Test: blockdev copy ...passed 00:09:39.538 00:09:39.538 Run Summary: Type Total Ran Passed Failed Inactive 00:09:39.538 suites 1 1 n/a 0 0 00:09:39.538 tests 23 23 23 0 0 00:09:39.538 asserts 152 152 152 0 n/a 00:09:39.538 00:09:39.538 Elapsed time = 1.119 seconds 00:09:39.797 13:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:39.797 13:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.797 13:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:39.797 13:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.797 13:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:39.797 13:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:39.797 13:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:39.797 13:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:39.797 13:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:39.797 13:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:39.797 13:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:39.797 13:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:39.797 rmmod nvme_tcp 00:09:39.797 rmmod nvme_fabrics 00:09:39.797 rmmod nvme_keyring 00:09:39.797 13:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:39.797 13:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:39.797 13:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
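The nvmftestfini cleanup above unloads the kernel initiator modules under set +e, retrying because nvme-tcp can remain busy briefly after the last disconnect; this trace shows one pass succeeding (rmmod nvme_tcp, nvme_fabrics, nvme_keyring). The idiom, condensed into a sketch (the retry delay here is illustrative and the in-tree loop differs in detail):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e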
00:09:39.797 13:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2594081 ']' 00:09:39.797 13:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2594081 00:09:39.797 13:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2594081 ']' 00:09:39.797 13:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2594081 00:09:39.797 13:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:39.797 13:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:39.797 13:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2594081 00:09:39.797 13:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:39.797 13:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:39.797 13:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2594081' 00:09:39.797 killing process with pid 2594081 00:09:39.797 13:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2594081 00:09:39.797 13:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2594081 00:09:40.057 13:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:40.057 13:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:40.057 13:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:40.057 13:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:40.057 13:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:40.057 13:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:40.057 13:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:40.057 13:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:40.057 13:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:40.057 13:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.057 13:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.057 13:59:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.970 13:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:42.230 00:09:42.231 real 0m12.091s 00:09:42.231 user 0m13.599s 00:09:42.231 sys 0m6.139s 00:09:42.231 13:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.231 13:59:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.231 ************************************ 00:09:42.231 END TEST nvmf_bdevio 00:09:42.231 ************************************ 00:09:42.231 13:59:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:42.231 00:09:42.231 real 5m4.183s 00:09:42.231 user 11m54.069s 00:09:42.231 sys 1m51.960s 
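The final teardown steps remove only SPDK's own state: iptr replays the firewall minus every rule tagged SPDK_NVMF, _remove_spdk_ns (its xtrace silenced above) drops the test namespace, and the leftover initiator address is flushed. Approximately, with the namespace deletion as an assumed equivalent of the silenced helper:

    iptables-save | grep -v SPDK_NVMF | iptables-restore    # strip only the tagged rules
    ip netns del cvl_0_0_ns_spdk                            # sketch of what _remove_spdk_ns does
    ip -4 addr flush cvl_0_1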
00:09:42.231 13:59:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.231 13:59:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:42.231 ************************************ 00:09:42.231 END TEST nvmf_target_core 00:09:42.231 ************************************ 00:09:42.231 13:59:48 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:42.231 13:59:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:42.231 13:59:48 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.231 13:59:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:42.231 ************************************ 00:09:42.231 START TEST nvmf_target_extra 00:09:42.231 ************************************ 00:09:42.231 13:59:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:42.231 * Looking for test storage... 00:09:42.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:42.231 13:59:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:42.231 13:59:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:09:42.231 13:59:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:42.492 13:59:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:42.492 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.492 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.492 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.492 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.492 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.492 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.492 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.492 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:42.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.493 --rc genhtml_branch_coverage=1 00:09:42.493 --rc genhtml_function_coverage=1 00:09:42.493 --rc genhtml_legend=1 00:09:42.493 --rc geninfo_all_blocks=1 00:09:42.493 --rc geninfo_unexecuted_blocks=1 00:09:42.493 00:09:42.493 ' 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:42.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.493 --rc genhtml_branch_coverage=1 00:09:42.493 --rc genhtml_function_coverage=1 00:09:42.493 --rc genhtml_legend=1 00:09:42.493 --rc geninfo_all_blocks=1 00:09:42.493 --rc geninfo_unexecuted_blocks=1 00:09:42.493 00:09:42.493 ' 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:42.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.493 --rc genhtml_branch_coverage=1 00:09:42.493 --rc genhtml_function_coverage=1 00:09:42.493 --rc genhtml_legend=1 00:09:42.493 --rc geninfo_all_blocks=1 00:09:42.493 --rc geninfo_unexecuted_blocks=1 00:09:42.493 00:09:42.493 ' 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:42.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.493 --rc genhtml_branch_coverage=1 00:09:42.493 --rc genhtml_function_coverage=1 00:09:42.493 --rc genhtml_legend=1 00:09:42.493 --rc geninfo_all_blocks=1 00:09:42.493 --rc geninfo_unexecuted_blocks=1 00:09:42.493 00:09:42.493 ' 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
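The xtrace burst above is scripts/common.sh probing the installed lcov: lt 1.15 2 splits both version strings on '.', '-', and ':' and compares them numerically field by field up to the longer length. A self-contained sketch of that comparison (helper name kept, internals condensed from the cmp_versions trace):

    lt() {    # return 0 iff dotted version $1 < $2
        local -a ver1 ver2
        local v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0    # missing fields count as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* option spelling"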
00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:42.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:42.493 ************************************ 00:09:42.493 START TEST nvmf_example 00:09:42.493 ************************************ 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:42.493 * Looking for test storage... 
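One message worth decoding before the next test starts: the repeated '/…/nvmf/common.sh: line 33: [: : integer expression expected' comes from build_nvmf_app_args, where an unset flag expands to an empty string inside a numeric test, so '[' sees no integer at all. The trace does not show which variable is empty; with a hypothetical flag name, the failing and defensive spellings are:

    flag=''
    [ "$flag" -eq 1 ]       # -> [: : integer expression expected (exit status 2)
    [ "${flag:-0}" -eq 1 ]  # empty defaults to 0; the test stays numeric and simply returns false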
00:09:42.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:09:42.493 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:42.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.755 --rc genhtml_branch_coverage=1 00:09:42.755 --rc genhtml_function_coverage=1 00:09:42.755 --rc genhtml_legend=1 00:09:42.755 --rc geninfo_all_blocks=1 00:09:42.755 --rc geninfo_unexecuted_blocks=1 00:09:42.755 00:09:42.755 ' 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:42.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.755 --rc genhtml_branch_coverage=1 00:09:42.755 --rc genhtml_function_coverage=1 00:09:42.755 --rc genhtml_legend=1 00:09:42.755 --rc geninfo_all_blocks=1 00:09:42.755 --rc geninfo_unexecuted_blocks=1 00:09:42.755 00:09:42.755 ' 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:42.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.755 --rc genhtml_branch_coverage=1 00:09:42.755 --rc genhtml_function_coverage=1 00:09:42.755 --rc genhtml_legend=1 00:09:42.755 --rc geninfo_all_blocks=1 00:09:42.755 --rc geninfo_unexecuted_blocks=1 00:09:42.755 00:09:42.755 ' 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:42.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.755 --rc genhtml_branch_coverage=1 00:09:42.755 --rc genhtml_function_coverage=1 00:09:42.755 --rc genhtml_legend=1 00:09:42.755 --rc geninfo_all_blocks=1 00:09:42.755 --rc geninfo_unexecuted_blocks=1 00:09:42.755 00:09:42.755 ' 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:42.755 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:42.756 13:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:42.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:42.756 13:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:42.756 13:59:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:50.898 13:59:56 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:50.898 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:50.899 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:50.899 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:50.899 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:50.899 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.899 13:59:56 
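
The trace above is gather_supported_nvmf_pci_devs at work: nvmf/common.sh seeds arrays of supported Intel E810/X722 and Mellanox device IDs, filters the machine's PCI functions against them, and resolves each hit to its kernel interface by globbing the device's sysfs net/ directory, which is how the two E810 ports found at 0000:4b:00.0 and 0000:4b:00.1 end up mapped to cvl_0_0 and cvl_0_1. A minimal standalone sketch of that resolution step, assuming pciutils is available and using the 0x8086:0x159b device ID seen in this run:

# Sketch only, not part of the harness: map each E810 PCI function to its
# net device via the same /sys/bus/pci/devices/$pci/net/* glob traced above.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
  for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
    [[ -e $netdir ]] && echo "Found net devices under $pci: ${netdir##*/}"
  done
done
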
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:50.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:50.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:09:50.899 00:09:50.899 --- 10.0.0.2 ping statistics --- 00:09:50.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.899 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:50.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:50.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:09:50.899 00:09:50.899 --- 10.0.0.1 ping statistics --- 00:09:50.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.899 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2598847 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2598847 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2598847 ']' 00:09:50.899 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.900 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.900 13:59:56 
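
nvmf_tcp_init, traced above, turns the two E810 ports into a point-to-point NVMe/TCP test link: the target port cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule admits TCP port 4420, and the two pings prove reachability in both directions before nvme-tcp is loaded and the example target is launched inside the namespace. The same topology condensed into a sketch, assuming iproute2/iptables and hypothetical interface names tgt0/ini0:

# Sketch only: two-port namespace topology for NVMe/TCP tests
# (namespace and interface names here are hypothetical).
ip netns add spdk_tgt_ns                                   # private namespace for the target port
ip link set tgt0 netns spdk_tgt_ns                         # move the target port into it
ip addr add 10.0.0.1/24 dev ini0                           # initiator side stays in the root namespace
ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev tgt0
ip link set ini0 up
ip netns exec spdk_tgt_ns ip link set tgt0 up
iptables -I INPUT 1 -i ini0 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP on the default port
ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1               # target -> initiator
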
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.900 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.900 13:59:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:51.160 13:59:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:01.321 Initializing NVMe Controllers 00:10:01.321 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:01.321 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:01.321 Initialization complete. Launching workers. 00:10:01.321 ======================================================== 00:10:01.321 Latency(us) 00:10:01.321 Device Information : IOPS MiB/s Average min max 00:10:01.321 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18919.77 73.91 3382.27 426.41 16431.42 00:10:01.321 ======================================================== 00:10:01.321 Total : 18919.77 73.91 3382.27 426.41 16431.42 00:10:01.321 00:10:01.321 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:01.321 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:01.321 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:01.321 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:01.321 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:01.321 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:01.321 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:01.321 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:01.582 rmmod nvme_tcp 00:10:01.582 rmmod nvme_fabrics 00:10:01.582 rmmod nvme_keyring 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2598847 ']' 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2598847 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2598847 ']' 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2598847 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2598847 00:10:01.582 14:00:07 
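
Traced above, nvmf_example.sh configures the target entirely over the RPC socket and then benchmarks it: nvmf_create_transport brings up TCP, bdev_malloc_create backs the subsystem with a 64 MiB RAM disk, and subsystem nqn.2016-06.io.spdk:cnode1 is exposed on 10.0.0.2:4420 before spdk_nvme_perf drives ten seconds of 4 KiB random I/O with a 30% read mix at queue depth 64 (18919.77 IOPS at 3382.27 us average latency in this run). The same configuration sequence, sketched as the equivalent scripts/rpc.py calls under the assumption of an SPDK checkout and a target already listening on /var/tmp/spdk.sock:

# Sketch only: rpc.py equivalents of the rpc_cmd sequence traced above.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport, 8 KiB in-capsule data
scripts/rpc.py bdev_malloc_create 64 512                 # 64 MiB RAM disk, 512 B blocks -> Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
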
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2598847' 00:10:01.582 killing process with pid 2598847 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2598847 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2598847 00:10:01.582 nvmf threads initialize successfully 00:10:01.582 bdev subsystem init successfully 00:10:01.582 created a nvmf target service 00:10:01.582 create targets's poll groups done 00:10:01.582 all subsystems of target started 00:10:01.582 nvmf target is running 00:10:01.582 all subsystems of target stopped 00:10:01.582 destroy targets's poll groups done 00:10:01.582 destroyed the nvmf target service 00:10:01.582 bdev subsystem finish successfully 00:10:01.582 nvmf threads destroy successfully 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.582 14:00:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.136 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:04.136 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:04.136 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:04.136 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.136 00:10:04.136 real 0m21.307s 00:10:04.136 user 0m46.314s 00:10:04.136 sys 0m6.989s 00:10:04.136 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.136 14:00:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.136 ************************************ 00:10:04.136 END TEST nvmf_example 00:10:04.136 ************************************ 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:04.136 ************************************ 00:10:04.136 START TEST nvmf_filesystem 00:10:04.136 ************************************ 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:04.136 * Looking for test storage... 00:10:04.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:04.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.136 --rc genhtml_branch_coverage=1 00:10:04.136 --rc genhtml_function_coverage=1 00:10:04.136 --rc genhtml_legend=1 00:10:04.136 --rc geninfo_all_blocks=1 00:10:04.136 --rc geninfo_unexecuted_blocks=1 00:10:04.136 00:10:04.136 ' 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:04.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.136 --rc genhtml_branch_coverage=1 00:10:04.136 --rc genhtml_function_coverage=1 00:10:04.136 --rc genhtml_legend=1 00:10:04.136 --rc geninfo_all_blocks=1 00:10:04.136 --rc geninfo_unexecuted_blocks=1 00:10:04.136 00:10:04.136 ' 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:04.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.136 --rc genhtml_branch_coverage=1 00:10:04.136 --rc genhtml_function_coverage=1 00:10:04.136 --rc genhtml_legend=1 00:10:04.136 --rc geninfo_all_blocks=1 00:10:04.136 --rc geninfo_unexecuted_blocks=1 00:10:04.136 00:10:04.136 ' 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:04.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.136 --rc genhtml_branch_coverage=1 00:10:04.136 --rc genhtml_function_coverage=1 00:10:04.136 --rc genhtml_legend=1 00:10:04.136 --rc geninfo_all_blocks=1 00:10:04.136 --rc geninfo_unexecuted_blocks=1 00:10:04.136 00:10:04.136 ' 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:04.136 14:00:10 
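
The scripts/common.sh trace above is a version gate: `lt 1.15 2` splits the installed lcov version and the threshold on '.', '-' and ':' into arrays and compares them field by field, so lcov 1.15 sorts below 2 and the pre-2.0 `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` option names are selected. A compact sketch of that field-wise comparison for plain dotted versions (the helper name ver_lt is hypothetical):

# Sketch only: field-wise dotted-version compare, as cmp_versions does above.
ver_lt() {                                   # exit 0 when $1 < $2
  local -a v1 v2; local i
  IFS=. read -ra v1 <<< "$1"
  IFS=. read -ra v2 <<< "$2"
  for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
    (( 10#${v1[i]:-0} < 10#${v2[i]:-0} )) && return 0
    (( 10#${v1[i]:-0} > 10#${v2[i]:-0} )) && return 1
  done
  return 1                                   # equal versions are not less-than
}
ver_lt 1.15 2 && echo 'lcov older than 2.0: use the legacy --rc option names'
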
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:04.136 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:04.137 
14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:04.137 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:04.137 #define SPDK_CONFIG_H 00:10:04.137 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:04.137 #define SPDK_CONFIG_APPS 1 00:10:04.137 #define SPDK_CONFIG_ARCH native 00:10:04.137 #undef SPDK_CONFIG_ASAN 00:10:04.137 #undef SPDK_CONFIG_AVAHI 00:10:04.137 #undef SPDK_CONFIG_CET 00:10:04.137 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:04.137 #define SPDK_CONFIG_COVERAGE 1 00:10:04.137 #define SPDK_CONFIG_CROSS_PREFIX 00:10:04.137 #undef SPDK_CONFIG_CRYPTO 00:10:04.137 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:04.137 #undef SPDK_CONFIG_CUSTOMOCF 00:10:04.137 #undef SPDK_CONFIG_DAOS 00:10:04.137 #define SPDK_CONFIG_DAOS_DIR 00:10:04.137 #define SPDK_CONFIG_DEBUG 1 00:10:04.137 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:04.137 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:04.137 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:04.137 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:04.138 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:04.138 #undef SPDK_CONFIG_DPDK_UADK 00:10:04.138 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:04.138 #define SPDK_CONFIG_EXAMPLES 1 00:10:04.138 #undef SPDK_CONFIG_FC 00:10:04.138 #define SPDK_CONFIG_FC_PATH 00:10:04.138 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:04.138 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:04.138 #define SPDK_CONFIG_FSDEV 1 00:10:04.138 #undef SPDK_CONFIG_FUSE 00:10:04.138 #undef SPDK_CONFIG_FUZZER 00:10:04.138 #define SPDK_CONFIG_FUZZER_LIB 00:10:04.138 #undef SPDK_CONFIG_GOLANG 00:10:04.138 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:04.138 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:04.138 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:04.138 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:04.138 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:04.138 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:04.138 #undef SPDK_CONFIG_HAVE_LZ4 00:10:04.138 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:04.138 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:04.138 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:04.138 #define SPDK_CONFIG_IDXD 1 00:10:04.138 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:04.138 #undef SPDK_CONFIG_IPSEC_MB 00:10:04.138 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:04.138 #define SPDK_CONFIG_ISAL 1 00:10:04.138 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:04.138 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:04.138 #define SPDK_CONFIG_LIBDIR 00:10:04.138 #undef SPDK_CONFIG_LTO 00:10:04.138 #define SPDK_CONFIG_MAX_LCORES 128 00:10:04.138 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:04.138 #define SPDK_CONFIG_NVME_CUSE 1 00:10:04.138 #undef SPDK_CONFIG_OCF 00:10:04.138 #define SPDK_CONFIG_OCF_PATH 00:10:04.138 #define SPDK_CONFIG_OPENSSL_PATH 00:10:04.138 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:04.138 #define SPDK_CONFIG_PGO_DIR 00:10:04.138 #undef SPDK_CONFIG_PGO_USE 00:10:04.138 #define SPDK_CONFIG_PREFIX /usr/local 00:10:04.138 #undef SPDK_CONFIG_RAID5F 00:10:04.138 #undef SPDK_CONFIG_RBD 00:10:04.138 #define SPDK_CONFIG_RDMA 1 00:10:04.138 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:04.138 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:04.138 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:04.138 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:04.138 #define SPDK_CONFIG_SHARED 1 00:10:04.138 #undef SPDK_CONFIG_SMA 00:10:04.138 #define SPDK_CONFIG_TESTS 1 00:10:04.138 #undef SPDK_CONFIG_TSAN 
00:10:04.138 #define SPDK_CONFIG_UBLK 1 00:10:04.138 #define SPDK_CONFIG_UBSAN 1 00:10:04.138 #undef SPDK_CONFIG_UNIT_TESTS 00:10:04.138 #undef SPDK_CONFIG_URING 00:10:04.138 #define SPDK_CONFIG_URING_PATH 00:10:04.138 #undef SPDK_CONFIG_URING_ZNS 00:10:04.138 #undef SPDK_CONFIG_USDT 00:10:04.138 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:04.138 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:04.138 #define SPDK_CONFIG_VFIO_USER 1 00:10:04.138 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:04.138 #define SPDK_CONFIG_VHOST 1 00:10:04.138 #define SPDK_CONFIG_VIRTIO 1 00:10:04.138 #undef SPDK_CONFIG_VTUNE 00:10:04.138 #define SPDK_CONFIG_VTUNE_DIR 00:10:04.138 #define SPDK_CONFIG_WERROR 1 00:10:04.138 #define SPDK_CONFIG_WPDK_DIR 00:10:04.138 #undef SPDK_CONFIG_XNVME 00:10:04.138 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:04.138 14:00:10 
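
Each nested re-source of paths/export.sh above prepends the same toolchain directories again, which is why the exported PATH carries half a dozen copies of the go/protoc/golangci entries; harmless, but noisy. A guard of this shape would keep the prepend idempotent (sketch only; path_prepend is hypothetical and not what paths/export.sh currently does):

# Sketch only: prepend a directory to PATH unless it is already present.
path_prepend() {
  case ":$PATH:" in
    *":$1:"*) ;;                 # already on PATH: leave it alone
    *) PATH="$1:$PATH" ;;
  esac
}
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/protoc/21.7/bin
path_prepend /opt/golangci/1.54.2/bin
export PATH
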
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:04.138 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
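
The long run of ': <value>' / 'export SPDK_TEST_*' pairs that autotest_common.sh emits here and continues below is bash's default-assignment idiom: each test flag keeps whatever value the job's environment already supplies (hence ': 1' for features this run enables and ': tcp' for the transport) and otherwise falls back to the default shown after the colon. The idiom behind each traced pair, sketched with one flag from this run:

# Sketch only: the default-then-export idiom producing each ': 0' / 'export' pair above.
: "${SPDK_TEST_ISCSI:=0}"   # keep the caller's value; default to 0 when unset
export SPDK_TEST_ISCSI      # xtrace prints the first line as ': 0' (or ': 1' when set)
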
00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:04.139 14:00:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:04.139 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
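The LD_LIBRARY_PATH and PYTHONPATH values above repeat the same directories several times; that is real accumulation from autotest_common.sh being sourced once per nested test script, not log corruption (the leading ':' is the originally empty variable). A rough sketch of the append pattern behind it, using the directory variables defined at autotest_common.sh@181-183 in the trace; the exact expression in the source may differ:

# Each (re-)source appends the freshly built SPDK/DPDK/libvfio-user
# libraries again, so tests load them instead of any system copies.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$SPDK_LIB_DIR:$DPDK_LIB_DIR:$VFIO_LIB_DIR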
00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
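Although this run only enables UBSAN, the harness still prepares a LeakSanitizer suppression file and the default RPC socket. A sketch of that setup as traced above; the file redirections are inferred, since xtrace does not print them:

# Rebuild the leak-sanitizer suppression list and point LSAN_OPTIONS at it.
asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo "leak:libfuse3.so" >> "$asan_suppression_file"
export LSAN_OPTIONS=suppressions=$asan_suppression_file
export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock   # default app RPC socket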
00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:04.140 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2602058 ]] 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2602058 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 
00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.EelKPZ 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.EelKPZ/tests/target /tmp/spdk.EelKPZ 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:04.141 14:00:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=122526765056 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356529664 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6829764608 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64668233728 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678264832 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847943168 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871306752 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23363584 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:04.141 14:00:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677642240 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678264832 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=622592 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935639040 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935651328 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:04.141 * Looking for test storage... 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=122526765056 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9044357120 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.141 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.142 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.142 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:04.142 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:10:04.142 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:10:04.142 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:04.142 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:04.142 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:10:04.142 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:10:04.142 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:04.142 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:04.142 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:04.142 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:04.142 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:04.142 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:04.142 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:04.142 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:04.142 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:04.142 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:04.142 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:04.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.404 --rc genhtml_branch_coverage=1 00:10:04.404 --rc genhtml_function_coverage=1 00:10:04.404 --rc genhtml_legend=1 00:10:04.404 --rc geninfo_all_blocks=1 00:10:04.404 --rc geninfo_unexecuted_blocks=1 00:10:04.404 00:10:04.404 ' 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:04.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.404 --rc genhtml_branch_coverage=1 00:10:04.404 --rc genhtml_function_coverage=1 00:10:04.404 --rc genhtml_legend=1 00:10:04.404 --rc geninfo_all_blocks=1 00:10:04.404 --rc geninfo_unexecuted_blocks=1 00:10:04.404 00:10:04.404 ' 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:04.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.404 --rc genhtml_branch_coverage=1 00:10:04.404 --rc genhtml_function_coverage=1 00:10:04.404 --rc genhtml_legend=1 00:10:04.404 --rc geninfo_all_blocks=1 00:10:04.404 --rc geninfo_unexecuted_blocks=1 00:10:04.404 00:10:04.404 ' 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:04.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.404 --rc genhtml_branch_coverage=1 00:10:04.404 --rc genhtml_function_coverage=1 00:10:04.404 --rc genhtml_legend=1 00:10:04.404 --rc geninfo_all_blocks=1 00:10:04.404 --rc geninfo_unexecuted_blocks=1 00:10:04.404 00:10:04.404 ' 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.404 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.404 14:00:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:04.405 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:04.405 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:04.405 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:04.405 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.405 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:04.405 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:04.405 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:04.405 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.405 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.405 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.405 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:04.405 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:04.405 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:04.405 14:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:12.541 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:12.541 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.541 14:00:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:12.541 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:12.541 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:12.541 14:00:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:12.541 14:00:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:12.541 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:12.541 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:12.541 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:12.541 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:12.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:10:12.541 00:10:12.541 --- 10.0.0.2 ping statistics --- 00:10:12.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.541 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:10:12.541 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:12.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:12.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:10:12.541 00:10:12.542 --- 10.0.0.1 ping statistics --- 00:10:12.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.542 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:12.542 ************************************ 00:10:12.542 START TEST nvmf_filesystem_no_in_capsule 00:10:12.542 ************************************ 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2606035 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2606035 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2606035 ']' 00:10:12.542 
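The namespace plumbing traced above is worth isolating: the target-side port is moved into its own network namespace so NVMe/TCP traffic genuinely crosses the link between the two E810 ports instead of short-circuiting over loopback. Condensed from the commands in the log (only the `-m comment` decoration on the iptables rule is elided):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # host -> target sanity check
    modprobe nvme-tcp                                      # the initiator needs the kernel driver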
14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.542 14:00:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.542 [2024-12-05 14:00:18.187526] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:10:12.542 [2024-12-05 14:00:18.187611] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.542 [2024-12-05 14:00:18.287045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.542 [2024-12-05 14:00:18.340552] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.542 [2024-12-05 14:00:18.340603] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.542 [2024-12-05 14:00:18.340611] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.542 [2024-12-05 14:00:18.340623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.542 [2024-12-05 14:00:18.340630] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
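nvmfappstart boils down to launching the target binary inside the namespace and blocking until its RPC socket answers. A hedged sketch, with flags copied from the trace; the polling loop is an assumption about what waitforlisten does, and the rpc.py path is assumed:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the app is ready
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done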
00:10:12.542 [2024-12-05 14:00:18.343065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.542 [2024-12-05 14:00:18.343224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.542 [2024-12-05 14:00:18.343381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.542 [2024-12-05 14:00:18.343382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.803 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.803 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:12.803 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:12.803 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:12.803 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.803 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.803 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:12.803 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:12.803 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.803 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.803 [2024-12-05 14:00:19.052383] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:12.803 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.803 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:12.803 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.803 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.066 Malloc1 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.066 14:00:19 
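The rpc_cmd calls around this point provision the whole target in five steps; gathered into one sequence for readability, with rpc.py standing in for the rpc_cmd wrapper:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0        # -c 0: no in-capsule data
    rpc.py bdev_malloc_create 512 512 -b Malloc1               # 512 MiB ram disk, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # get_bdev_size below does this with two separate jq calls; combined here:
    rpc.py bdev_get_bdevs -b Malloc1 | jq '.[0].block_size * .[0].num_blocks'   # -> 536870912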
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.066 [2024-12-05 14:00:19.213690] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:13.066 { 00:10:13.066 "name": "Malloc1", 00:10:13.066 "aliases": [ 00:10:13.066 "271d69f3-da86-4a15-ac78-71c70f9463c3" 00:10:13.066 ], 00:10:13.066 "product_name": "Malloc disk", 00:10:13.066 "block_size": 512, 00:10:13.066 "num_blocks": 1048576, 00:10:13.066 "uuid": "271d69f3-da86-4a15-ac78-71c70f9463c3", 00:10:13.066 "assigned_rate_limits": { 00:10:13.066 "rw_ios_per_sec": 0, 00:10:13.066 "rw_mbytes_per_sec": 0, 00:10:13.066 "r_mbytes_per_sec": 0, 00:10:13.066 "w_mbytes_per_sec": 0 00:10:13.066 }, 00:10:13.066 "claimed": true, 00:10:13.066 "claim_type": "exclusive_write", 00:10:13.066 "zoned": false, 00:10:13.066 "supported_io_types": { 00:10:13.066 "read": 
true, 00:10:13.066 "write": true, 00:10:13.066 "unmap": true, 00:10:13.066 "flush": true, 00:10:13.066 "reset": true, 00:10:13.066 "nvme_admin": false, 00:10:13.066 "nvme_io": false, 00:10:13.066 "nvme_io_md": false, 00:10:13.066 "write_zeroes": true, 00:10:13.066 "zcopy": true, 00:10:13.066 "get_zone_info": false, 00:10:13.066 "zone_management": false, 00:10:13.066 "zone_append": false, 00:10:13.066 "compare": false, 00:10:13.066 "compare_and_write": false, 00:10:13.066 "abort": true, 00:10:13.066 "seek_hole": false, 00:10:13.066 "seek_data": false, 00:10:13.066 "copy": true, 00:10:13.066 "nvme_iov_md": false 00:10:13.066 }, 00:10:13.066 "memory_domains": [ 00:10:13.066 { 00:10:13.066 "dma_device_id": "system", 00:10:13.066 "dma_device_type": 1 00:10:13.066 }, 00:10:13.066 { 00:10:13.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.066 "dma_device_type": 2 00:10:13.066 } 00:10:13.066 ], 00:10:13.066 "driver_specific": {} 00:10:13.066 } 00:10:13.066 ]' 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:13.066 14:00:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:14.984 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:14.984 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:14.984 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:14.984 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:14.984 14:00:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:16.896 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:16.896 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:16.896 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:16.896 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:16.897 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:16.897 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:16.897 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:16.897 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:16.897 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:16.897 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:16.897 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:16.897 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:16.897 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:16.897 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:16.897 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:16.897 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:16.897 14:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:17.157 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:17.157 14:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:18.099 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:18.099 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:18.099 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:18.099 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.099 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.359 ************************************ 00:10:18.359 START TEST filesystem_ext4 00:10:18.359 ************************************ 00:10:18.359 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
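On the initiator side, the sequence just traced reduces to a connect, a poll for the subsystem serial, and a single GPT partition spanning the disk. Condensed from the log; the until-loop paraphrases waitforserial, which retries with a 2-second sleep:

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe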
00:10:18.359 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:18.359 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:18.359 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:18.359 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:18.359 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:18.359 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:18.359 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:18.359 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:18.359 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:18.359 14:00:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:18.359 mke2fs 1.47.0 (5-Feb-2023) 00:10:18.359 Discarding device blocks: 0/522240 done 00:10:18.359 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:18.359 Filesystem UUID: a209e3b3-ffcc-463a-969e-51245b3ca401 00:10:18.359 Superblock backups stored on blocks: 00:10:18.359 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:18.359 00:10:18.359 Allocating group tables: 0/64 done 00:10:18.359 Writing inode tables: 0/64 done 00:10:18.359 Creating journal (8192 blocks): done 00:10:20.310 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:10:20.310 00:10:20.310 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:20.310 14:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:26.896 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:26.896 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:26.896 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:26.896 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:26.896 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:26.896 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:26.896 
14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2606035 00:10:26.896 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:26.896 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:26.896 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:26.896 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:26.896 00:10:26.896 real 0m7.695s 00:10:26.896 user 0m0.029s 00:10:26.896 sys 0m0.053s 00:10:26.896 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.896 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:26.896 ************************************ 00:10:26.896 END TEST filesystem_ext4 00:10:26.896 ************************************ 00:10:26.896 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:26.896 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:26.896 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.897 ************************************ 00:10:26.897 START TEST filesystem_btrfs 00:10:26.897 ************************************ 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:26.897 14:00:32 
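Each filesystem case, the ext4 run above and the btrfs and xfs runs below, exercises the same short smoke test from target/filesystem.sh; only the mkfs invocation differs (ext4 forces with -F, btrfs and xfs with -f):

    mkfs.ext4 -F /dev/nvme0n1p1          # or: mkfs.btrfs -f / mkfs.xfs -f
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                   # the target must still be alive afterwards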
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:26.897 btrfs-progs v6.8.1 00:10:26.897 See https://btrfs.readthedocs.io for more information. 00:10:26.897 00:10:26.897 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:26.897 NOTE: several default settings have changed in version 5.15, please make sure 00:10:26.897 this does not affect your deployments: 00:10:26.897 - DUP for metadata (-m dup) 00:10:26.897 - enabled no-holes (-O no-holes) 00:10:26.897 - enabled free-space-tree (-R free-space-tree) 00:10:26.897 00:10:26.897 Label: (null) 00:10:26.897 UUID: 80b4120d-038e-4e27-ad6b-b7fd372505e9 00:10:26.897 Node size: 16384 00:10:26.897 Sector size: 4096 (CPU page size: 4096) 00:10:26.897 Filesystem size: 510.00MiB 00:10:26.897 Block group profiles: 00:10:26.897 Data: single 8.00MiB 00:10:26.897 Metadata: DUP 32.00MiB 00:10:26.897 System: DUP 8.00MiB 00:10:26.897 SSD detected: yes 00:10:26.897 Zoned device: no 00:10:26.897 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:26.897 Checksum: crc32c 00:10:26.897 Number of devices: 1 00:10:26.897 Devices: 00:10:26.897 ID SIZE PATH 00:10:26.897 1 510.00MiB /dev/nvme0n1p1 00:10:26.897 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2606035 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:26.897 
14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:26.897 00:10:26.897 real 0m0.709s 00:10:26.897 user 0m0.024s 00:10:26.897 sys 0m0.064s 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:26.897 ************************************ 00:10:26.897 END TEST filesystem_btrfs 00:10:26.897 ************************************ 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.897 ************************************ 00:10:26.897 START TEST filesystem_xfs 00:10:26.897 ************************************ 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:26.897 14:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:27.466 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:27.466 = sectsz=512 attr=2, projid32bit=1 00:10:27.466 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:27.466 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:27.466 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:27.466 = sunit=0 swidth=0 blks 00:10:27.466 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:27.466 log =internal log bsize=4096 blocks=16384, version=2 00:10:27.466 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:27.466 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:28.404 Discarding blocks...Done. 00:10:28.404 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:28.404 14:00:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:30.955 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:30.955 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:30.955 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:30.955 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:30.955 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:30.955 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:30.955 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2606035 00:10:30.955 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:30.955 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:30.955 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:30.955 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:30.955 00:10:30.955 real 0m4.153s 00:10:30.955 user 0m0.027s 00:10:30.955 sys 0m0.056s 00:10:30.955 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.955 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:30.955 ************************************ 00:10:30.955 END TEST filesystem_xfs 00:10:30.955 ************************************ 00:10:30.955 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:30.955 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:31.213 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:31.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.473 14:00:37 
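The teardown around this point condenses to: drop the test partition under a lock, disconnect the initiator, delete the subsystem, and stop the target. From the commands in the trace (killprocess amounts to the final kill-and-wait):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"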
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:31.473 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:31.473 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:31.473 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.473 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:31.473 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.473 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:31.473 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:31.473 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.473 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.473 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.473 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:31.473 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2606035 00:10:31.473 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2606035 ']' 00:10:31.473 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2606035 00:10:31.473 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:31.473 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:31.473 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2606035 00:10:31.473 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:31.473 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:31.473 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2606035' 00:10:31.473 killing process with pid 2606035 00:10:31.473 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2606035 00:10:31.473 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 2606035 00:10:31.733 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:31.733 00:10:31.733 real 0m19.764s 00:10:31.733 user 1m18.032s 00:10:31.733 sys 0m1.310s 00:10:31.733 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.733 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.733 ************************************ 00:10:31.733 END TEST nvmf_filesystem_no_in_capsule 00:10:31.733 ************************************ 00:10:31.733 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:31.733 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:31.733 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.733 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:31.733 ************************************ 00:10:31.733 START TEST nvmf_filesystem_in_capsule 00:10:31.733 ************************************ 00:10:31.733 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:31.733 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:31.733 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:31.733 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:31.733 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:31.733 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.733 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2610111 00:10:31.733 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2610111 00:10:31.733 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:31.733 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2610111 ']' 00:10:31.733 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.733 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:31.733 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
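The second pass repeats the whole suite with in-capsule data enabled: writes of up to 4096 bytes travel inside the NVMe/TCP command capsule instead of being fetched by the target in a separate data transfer. The only provisioning difference shows up in the transport options, visible a little further down:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c 4096: 4 KiB in-capsule data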
00:10:31.733 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:31.733 14:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.733 [2024-12-05 14:00:38.016524] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:10:31.733 [2024-12-05 14:00:38.016573] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.994 [2024-12-05 14:00:38.107694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:31.994 [2024-12-05 14:00:38.138975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:31.994 [2024-12-05 14:00:38.139001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:31.995 [2024-12-05 14:00:38.139007] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:31.995 [2024-12-05 14:00:38.139012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:31.995 [2024-12-05 14:00:38.139016] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:31.995 [2024-12-05 14:00:38.140275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.995 [2024-12-05 14:00:38.140424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:31.995 [2024-12-05 14:00:38.140574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.995 [2024-12-05 14:00:38.140577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:32.565 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:32.565 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:32.565 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:32.565 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:32.565 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.565 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.565 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:32.826 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:32.826 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.826 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.826 [2024-12-05 14:00:38.866280] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:32.826 14:00:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.826 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:32.826 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.826 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.826 Malloc1 00:10:32.826 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.826 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:32.826 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.826 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.826 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.826 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:32.826 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.826 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.826 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.826 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:32.826 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.826 14:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.826 [2024-12-05 14:00:39.000271] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.826 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.826 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:32.826 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:32.826 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:32.826 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:32.826 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:32.826 14:00:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:32.826 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.826 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.826 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.826 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:32.826 { 00:10:32.826 "name": "Malloc1", 00:10:32.826 "aliases": [ 00:10:32.826 "050b5222-894d-47ed-9195-e6c4906e5852" 00:10:32.826 ], 00:10:32.826 "product_name": "Malloc disk", 00:10:32.826 "block_size": 512, 00:10:32.826 "num_blocks": 1048576, 00:10:32.826 "uuid": "050b5222-894d-47ed-9195-e6c4906e5852", 00:10:32.826 "assigned_rate_limits": { 00:10:32.826 "rw_ios_per_sec": 0, 00:10:32.826 "rw_mbytes_per_sec": 0, 00:10:32.826 "r_mbytes_per_sec": 0, 00:10:32.826 "w_mbytes_per_sec": 0 00:10:32.826 }, 00:10:32.826 "claimed": true, 00:10:32.826 "claim_type": "exclusive_write", 00:10:32.826 "zoned": false, 00:10:32.826 "supported_io_types": { 00:10:32.826 "read": true, 00:10:32.826 "write": true, 00:10:32.826 "unmap": true, 00:10:32.826 "flush": true, 00:10:32.826 "reset": true, 00:10:32.826 "nvme_admin": false, 00:10:32.826 "nvme_io": false, 00:10:32.826 "nvme_io_md": false, 00:10:32.826 "write_zeroes": true, 00:10:32.826 "zcopy": true, 00:10:32.826 "get_zone_info": false, 00:10:32.826 "zone_management": false, 00:10:32.826 "zone_append": false, 00:10:32.826 "compare": false, 00:10:32.826 "compare_and_write": false, 00:10:32.826 "abort": true, 00:10:32.826 "seek_hole": false, 00:10:32.826 "seek_data": false, 00:10:32.826 "copy": true, 00:10:32.826 "nvme_iov_md": false 00:10:32.826 }, 00:10:32.826 "memory_domains": [ 00:10:32.826 { 00:10:32.826 "dma_device_id": "system", 00:10:32.826 "dma_device_type": 1 00:10:32.826 }, 00:10:32.826 { 00:10:32.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.826 "dma_device_type": 2 00:10:32.826 } 00:10:32.826 ], 00:10:32.826 "driver_specific": {} 00:10:32.826 } 00:10:32.826 ]' 00:10:32.826 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:32.826 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:32.826 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:33.087 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:33.087 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:33.087 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:33.087 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:33.087 14:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:34.493 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:34.493 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:34.493 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:34.493 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:34.493 14:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:36.403 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:36.403 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:36.403 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:36.403 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:36.403 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:36.403 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:36.403 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:36.403 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:36.403 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:36.403 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:36.403 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:36.403 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:36.403 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:36.403 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:36.403 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:36.403 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:36.404 14:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:36.664 14:00:42 
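Condensed from the trace above, the whole target bring-up plus host attach comes down to four RPCs and one nvme-cli call. A minimal standalone sketch (rpc_cmd in the harness is a wrapper around SPDK's rpc.py; the ./scripts/rpc.py path and the polling loop standing in for waitforserial are assumptions, while the NQN, serial, address, and sizes are the values this run used):

    # 512 MiB RAM-backed bdev: 1048576 blocks x 512 bytes (matches the bdev_get_bdevs dump above)
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    # Subsystem allowing any host (-a), with the serial the initiator greps for.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: attach over TCP, then poll until the namespace shows up by serial.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    while ! lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done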
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:36.924 14:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:38.308 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:38.308 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:38.308 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:38.308 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.308 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.308 ************************************ 00:10:38.308 START TEST filesystem_in_capsule_ext4 00:10:38.308 ************************************ 00:10:38.308 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:38.308 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:38.308 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:38.308 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:38.308 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:38.308 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:38.308 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:38.308 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:38.308 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:38.308 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:38.308 14:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:38.308 mke2fs 1.47.0 (5-Feb-2023) 00:10:38.308 Discarding device blocks: 0/522240 done 00:10:38.308 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:38.308 Filesystem UUID: 633f1992-9450-4c4b-b2e3-7f1bd58828b6 00:10:38.308 Superblock backups stored on blocks: 00:10:38.308 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:38.308 00:10:38.308 Allocating group tables: 0/64 done 00:10:38.308 Writing inode tables: 
0/64 done 00:10:40.864 Creating journal (8192 blocks): done 00:10:40.864 Writing superblocks and filesystem accounting information: 0/64 done 00:10:40.864 00:10:40.864 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:41.125 14:00:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:46.424 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:46.424 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:46.424 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:46.424 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:46.424 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:46.424 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:46.424 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2610111 00:10:46.424 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:46.424 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:46.424 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:46.424 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:46.424 00:10:46.424 real 0m8.473s 00:10:46.424 user 0m0.031s 00:10:46.424 sys 0m0.051s 00:10:46.424 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.424 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:46.424 ************************************ 00:10:46.424 END TEST filesystem_in_capsule_ext4 00:10:46.424 ************************************ 00:10:46.424 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:46.424 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:46.424 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.424 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.685 
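The make_filesystem helper traced for each variant differs only in the force flag it hands mkfs: the xtrace shows force=-F when fstype is ext4 and force=-f otherwise. A rough reconstruction of that branch (the retry machinery implied by local i=0 and the return 0 at autotest_common.sh@949 is elided; treat this as a sketch, not the helper verbatim):

    make_filesystem() {
        local fstype=$1 dev_name=$2 i=0 force
        if [ "$fstype" = ext4 ]; then
            force=-F    # mke2fs spells force as -F
        else
            force=-f    # mkfs.btrfs and mkfs.xfs use -f
        fi
        # The real helper loops on failure; a single attempt suffices here.
        mkfs."$fstype" $force "$dev_name"
    }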
************************************ 00:10:46.685 START TEST filesystem_in_capsule_btrfs 00:10:46.685 ************************************ 00:10:46.685 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:46.685 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:46.685 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:46.685 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:46.685 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:46.685 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:46.685 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:46.685 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:46.685 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:46.685 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:46.685 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:46.685 btrfs-progs v6.8.1 00:10:46.685 See https://btrfs.readthedocs.io for more information. 00:10:46.685 00:10:46.685 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:46.685 NOTE: several default settings have changed in version 5.15, please make sure 00:10:46.685 this does not affect your deployments: 00:10:46.685 - DUP for metadata (-m dup) 00:10:46.685 - enabled no-holes (-O no-holes) 00:10:46.685 - enabled free-space-tree (-R free-space-tree) 00:10:46.685 00:10:46.685 Label: (null) 00:10:46.685 UUID: d945c4c7-2d9e-419f-93c7-6d313e972446 00:10:46.685 Node size: 16384 00:10:46.685 Sector size: 4096 (CPU page size: 4096) 00:10:46.685 Filesystem size: 510.00MiB 00:10:46.685 Block group profiles: 00:10:46.685 Data: single 8.00MiB 00:10:46.685 Metadata: DUP 32.00MiB 00:10:46.685 System: DUP 8.00MiB 00:10:46.685 SSD detected: yes 00:10:46.685 Zoned device: no 00:10:46.685 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:46.685 Checksum: crc32c 00:10:46.685 Number of devices: 1 00:10:46.685 Devices: 00:10:46.685 ID SIZE PATH 00:10:46.685 1 510.00MiB /dev/nvme0n1p1 00:10:46.685 00:10:46.685 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:46.685 14:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2610111 00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:47.627 00:10:47.627 real 0m0.927s 00:10:47.627 user 0m0.027s 00:10:47.627 sys 0m0.059s 00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x
00:10:47.627 ************************************
00:10:47.627 END TEST filesystem_in_capsule_btrfs
00:10:47.627 ************************************
00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:47.627 ************************************
00:10:47.627 START TEST filesystem_in_capsule_xfs
00:10:47.627 ************************************
00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1
00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs
00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0
00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force
00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f
00:10:47.627 14:00:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:10:47.627 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:10:47.627 = sectsz=512 attr=2, projid32bit=1
00:10:47.627 = crc=1 finobt=1, sparse=1, rmapbt=0
00:10:47.627 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:10:47.627 data = bsize=4096 blocks=130560, imaxpct=25
00:10:47.627 = sunit=0 swidth=0 blks
00:10:47.627 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:10:47.627 log =internal log bsize=4096 blocks=16384, version=2
00:10:47.627 = sectsz=512 sunit=0 blks, lazy-count=1
00:10:47.627 realtime =none extsz=4096 blocks=0, rtextents=0
00:10:48.590 Discarding blocks...Done.
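Every filesystem variant then gets the same smoke test, already visible in the ext4 and btrfs runs above and repeated for xfs below: mount the new partition, create and delete a file with syncs around it, unmount, and confirm the target process and both block devices survived. In shell terms, roughly:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # target app (pid 2610111 in this run) still alive
    lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still exposed
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still present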
00:10:48.591 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:48.591 14:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:50.502 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:50.502 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:50.502 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:50.502 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:50.502 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:50.502 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:50.502 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2610111 00:10:50.502 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:50.502 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:50.502 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:50.502 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:50.502 00:10:50.502 real 0m2.834s 00:10:50.502 user 0m0.026s 00:10:50.502 sys 0m0.055s 00:10:50.502 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.502 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:50.502 ************************************ 00:10:50.502 END TEST filesystem_in_capsule_xfs 00:10:50.502 ************************************ 00:10:50.502 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:50.763 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:50.763 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:50.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.763 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:50.763 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:10:50.763 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:50.763 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:50.763 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:50.763 14:00:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:50.763 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:50.763 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:50.763 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.763 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.763 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.763 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:50.764 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2610111 00:10:50.764 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2610111 ']' 00:10:50.764 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2610111 00:10:50.764 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:50.764 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.764 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2610111 00:10:51.024 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:51.024 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:51.024 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2610111' 00:10:51.024 killing process with pid 2610111 00:10:51.024 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2610111 00:10:51.024 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2610111 00:10:51.024 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:51.024 00:10:51.024 real 0m19.331s 00:10:51.024 user 1m16.554s 00:10:51.024 sys 0m1.203s 00:10:51.024 14:00:57 
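The killprocess teardown traced above is deliberately defensive: confirm the pid argument is set and the process is alive, refuse to signal a sudo wrapper, then kill and reap. A sketch reconstructed from the xtrace (the real helper in autotest_common.sh appears to handle more cases than shown here):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0              # already gone
        if [ "$(uname)" = Linux ]; then
            # Never signal the sudo wrapper itself, only the real process (reactor_0 here).
            [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                                 # reap so the pid is not reused
    }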
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.024 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.024 ************************************ 00:10:51.024 END TEST nvmf_filesystem_in_capsule 00:10:51.024 ************************************ 00:10:51.285 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:51.285 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:51.285 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:51.285 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:51.285 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:51.285 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:51.285 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:51.285 rmmod nvme_tcp 00:10:51.285 rmmod nvme_fabrics 00:10:51.285 rmmod nvme_keyring 00:10:51.285 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:51.285 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:51.285 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:51.285 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:51.285 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:51.285 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:51.285 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:51.285 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:51.285 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:51.285 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:51.285 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:51.285 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:51.285 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:51.285 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.285 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.285 14:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.200 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:53.200 00:10:53.200 real 0m49.415s 00:10:53.200 user 2m36.956s 00:10:53.200 sys 0m8.424s 00:10:53.200 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.200 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:53.200 
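nvmftestfini's cleanup, traced above, removes only what the harness added: the kernel initiator modules, the iptables rules tagged SPDK_NVMF, and the addresses on the test interfaces. Condensed (the _remove_spdk_ns body is not shown in this trace, so the netns delete below is an assumption about what it does):

    sync
    modprobe -v -r nvme-tcp      # the rmmod lines show nvme_fabrics and nvme_keyring going too
    modprobe -v -r nvme-fabrics
    # Strip only harness-tagged rules, leaving the rest of the firewall untouched.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1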
************************************ 00:10:53.200 END TEST nvmf_filesystem 00:10:53.200 ************************************ 00:10:53.460 14:00:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:53.460 14:00:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:53.460 14:00:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.460 14:00:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:53.460 ************************************ 00:10:53.460 START TEST nvmf_target_discovery 00:10:53.460 ************************************ 00:10:53.460 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:53.460 * Looking for test storage... 00:10:53.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:53.460 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:53.460 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:10:53.460 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:53.460 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:53.460 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:53.461 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:53.461 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:53.461 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.461 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:53.461 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:53.461 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:53.461 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:53.461 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:53.461 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:53.461 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:53.461 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:53.461 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:53.461 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:53.461 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:53.461 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:53.722 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:53.722 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.722 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:53.722 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:53.722 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:53.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.723 --rc genhtml_branch_coverage=1 00:10:53.723 --rc genhtml_function_coverage=1 00:10:53.723 --rc genhtml_legend=1 00:10:53.723 --rc geninfo_all_blocks=1 00:10:53.723 --rc geninfo_unexecuted_blocks=1 00:10:53.723 00:10:53.723 ' 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:53.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.723 --rc genhtml_branch_coverage=1 00:10:53.723 --rc genhtml_function_coverage=1 00:10:53.723 --rc genhtml_legend=1 00:10:53.723 --rc geninfo_all_blocks=1 00:10:53.723 --rc geninfo_unexecuted_blocks=1 00:10:53.723 00:10:53.723 ' 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:53.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.723 --rc genhtml_branch_coverage=1 00:10:53.723 --rc genhtml_function_coverage=1 00:10:53.723 --rc genhtml_legend=1 00:10:53.723 --rc geninfo_all_blocks=1 00:10:53.723 --rc geninfo_unexecuted_blocks=1 00:10:53.723 00:10:53.723 ' 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:53.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.723 --rc genhtml_branch_coverage=1 00:10:53.723 --rc genhtml_function_coverage=1 00:10:53.723 --rc genhtml_legend=1 00:10:53.723 --rc geninfo_all_blocks=1 00:10:53.723 --rc geninfo_unexecuted_blocks=1 00:10:53.723 00:10:53.723 ' 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:53.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:53.723 14:00:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:01.994 14:01:06 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:01.994 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:01.994 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.994 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:01.995 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:01.995 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:01.995 14:01:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:01.995 14:01:07 
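Since both E810 ports live in one host, the harness moves the target port (cvl_0_0) into its own network namespace so NVMe/TCP traffic really crosses the wire instead of short-circuiting through the local stack. Together with the link-up and ping verification traced just below, the topology setup amounts to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target port isolated in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The tagged comment lets nvmftestfini strip exactly this rule later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator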
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:01.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:01.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:11:01.995 00:11:01.995 --- 10.0.0.2 ping statistics --- 00:11:01.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.995 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:01.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:01.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:11:01.995 00:11:01.995 --- 10.0.0.1 ping statistics --- 00:11:01.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.995 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2618354 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2618354 00:11:01.995 14:01:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2618354 ']' 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.995 14:01:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.995 [2024-12-05 14:01:07.311544] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:11:01.995 [2024-12-05 14:01:07.311613] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.995 [2024-12-05 14:01:07.414391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:01.995 [2024-12-05 14:01:07.467528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:01.995 [2024-12-05 14:01:07.467583] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:01.995 [2024-12-05 14:01:07.467592] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:01.995 [2024-12-05 14:01:07.467604] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:01.995 [2024-12-05 14:01:07.467610] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
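nvmfappstart launches the target binary inside the namespace created above and blocks until its RPC socket answers. A hand-rolled equivalent, assuming an SPDK checkout with scripts/rpc.py available (the unix-domain RPC socket lives on the shared filesystem, so rpc.py does not itself need ip netns exec; the pid variable is this sketch's, not the script's):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    pid=$!
    # poll until the app is up and serving RPCs on its default socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

The -m 0xF core mask asks for four reactors (cores 0-3), which matches the four "Reactor started" notices that follow.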
00:11:01.995 [2024-12-05 14:01:07.469670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.995 [2024-12-05 14:01:07.469828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.995 [2024-12-05 14:01:07.469988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.995 [2024-12-05 14:01:07.469989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.995 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.995 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:01.995 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:01.995 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:01.995 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.995 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.995 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:01.995 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.996 [2024-12-05 14:01:08.192100] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.996 Null1 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.996 14:01:08 
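With the app up, discovery.sh provisions the TCP transport and then loops i=1..4, creating one null bdev, one subsystem, one namespace, and one listener per iteration. The equivalent rpc.py calls, condensed (the $rpc shorthand is this sketch's own):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192             # transport opts as used by this suite
    for i in 1 2 3 4; do
        $rpc bdev_null_create Null$i 102400 512              # size 102400, block size 512, as traced
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            -a -s SPDK0000000000000$i                        # -a: allow any host
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done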
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.996 [2024-12-05 14:01:08.269756] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.996 Null2 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.996 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.256 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.256 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:02.256 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.256 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.256 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.256 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:02.256 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.256 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.256 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.256 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:02.256 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:02.256 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.256 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:02.256 Null3 00:11:02.256 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.256 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:02.256 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.257 Null4 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.257 14:01:08 
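The script next exposes the discovery service itself on 4420 and adds a referral to a second discovery endpoint on port 4430 (nothing listens there; only the referral record matters for this test). The nvme discover that follows should therefore report six records: the current discovery subsystem, the four cnodes, and the referral. Reproduced by hand, with the host identity values taken verbatim from this run:

    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
    nvme discover -t tcp -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be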
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.257 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:11:02.517 00:11:02.517 Discovery Log Number of Records 6, Generation counter 6 00:11:02.517 =====Discovery Log Entry 0====== 00:11:02.517 trtype: tcp 00:11:02.517 adrfam: ipv4 00:11:02.517 subtype: current discovery subsystem 00:11:02.517 treq: not required 00:11:02.517 portid: 0 00:11:02.517 trsvcid: 4420 00:11:02.518 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:02.518 traddr: 10.0.0.2 00:11:02.518 eflags: explicit discovery connections, duplicate discovery information 00:11:02.518 sectype: none 00:11:02.518 =====Discovery Log Entry 1====== 00:11:02.518 trtype: tcp 00:11:02.518 adrfam: ipv4 00:11:02.518 subtype: nvme subsystem 00:11:02.518 treq: not required 00:11:02.518 portid: 0 00:11:02.518 trsvcid: 4420 00:11:02.518 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:02.518 traddr: 10.0.0.2 00:11:02.518 eflags: none 00:11:02.518 sectype: none 00:11:02.518 =====Discovery Log Entry 2====== 00:11:02.518 trtype: tcp 00:11:02.518 adrfam: ipv4 00:11:02.518 subtype: nvme subsystem 00:11:02.518 treq: not required 00:11:02.518 portid: 0 00:11:02.518 trsvcid: 4420 00:11:02.518 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:02.518 traddr: 10.0.0.2 00:11:02.518 eflags: none 00:11:02.518 sectype: none 00:11:02.518 =====Discovery Log Entry 3====== 00:11:02.518 trtype: tcp 00:11:02.518 adrfam: ipv4 00:11:02.518 subtype: nvme subsystem 00:11:02.518 treq: not required 00:11:02.518 portid: 0 00:11:02.518 trsvcid: 4420 00:11:02.518 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:02.518 traddr: 10.0.0.2 00:11:02.518 eflags: none 00:11:02.518 sectype: none 00:11:02.518 =====Discovery Log Entry 4====== 00:11:02.518 trtype: tcp 00:11:02.518 adrfam: ipv4 00:11:02.518 subtype: nvme subsystem 
00:11:02.518 treq: not required 00:11:02.518 portid: 0 00:11:02.518 trsvcid: 4420 00:11:02.518 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:02.518 traddr: 10.0.0.2 00:11:02.518 eflags: none 00:11:02.518 sectype: none 00:11:02.518 =====Discovery Log Entry 5====== 00:11:02.518 trtype: tcp 00:11:02.518 adrfam: ipv4 00:11:02.518 subtype: discovery subsystem referral 00:11:02.518 treq: not required 00:11:02.518 portid: 0 00:11:02.518 trsvcid: 4430 00:11:02.518 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:02.518 traddr: 10.0.0.2 00:11:02.518 eflags: none 00:11:02.518 sectype: none 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:02.518 Perform nvmf subsystem discovery via RPC 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.518 [ 00:11:02.518 { 00:11:02.518 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:02.518 "subtype": "Discovery", 00:11:02.518 "listen_addresses": [ 00:11:02.518 { 00:11:02.518 "trtype": "TCP", 00:11:02.518 "adrfam": "IPv4", 00:11:02.518 "traddr": "10.0.0.2", 00:11:02.518 "trsvcid": "4420" 00:11:02.518 } 00:11:02.518 ], 00:11:02.518 "allow_any_host": true, 00:11:02.518 "hosts": [] 00:11:02.518 }, 00:11:02.518 { 00:11:02.518 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:02.518 "subtype": "NVMe", 00:11:02.518 "listen_addresses": [ 00:11:02.518 { 00:11:02.518 "trtype": "TCP", 00:11:02.518 "adrfam": "IPv4", 00:11:02.518 "traddr": "10.0.0.2", 00:11:02.518 "trsvcid": "4420" 00:11:02.518 } 00:11:02.518 ], 00:11:02.518 "allow_any_host": true, 00:11:02.518 "hosts": [], 00:11:02.518 "serial_number": "SPDK00000000000001", 00:11:02.518 "model_number": "SPDK bdev Controller", 00:11:02.518 "max_namespaces": 32, 00:11:02.518 "min_cntlid": 1, 00:11:02.518 "max_cntlid": 65519, 00:11:02.518 "namespaces": [ 00:11:02.518 { 00:11:02.518 "nsid": 1, 00:11:02.518 "bdev_name": "Null1", 00:11:02.518 "name": "Null1", 00:11:02.518 "nguid": "1E4749D77D3A4E74B359EB0A1DCFDC22", 00:11:02.518 "uuid": "1e4749d7-7d3a-4e74-b359-eb0a1dcfdc22" 00:11:02.518 } 00:11:02.518 ] 00:11:02.518 }, 00:11:02.518 { 00:11:02.518 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:02.518 "subtype": "NVMe", 00:11:02.518 "listen_addresses": [ 00:11:02.518 { 00:11:02.518 "trtype": "TCP", 00:11:02.518 "adrfam": "IPv4", 00:11:02.518 "traddr": "10.0.0.2", 00:11:02.518 "trsvcid": "4420" 00:11:02.518 } 00:11:02.518 ], 00:11:02.518 "allow_any_host": true, 00:11:02.518 "hosts": [], 00:11:02.518 "serial_number": "SPDK00000000000002", 00:11:02.518 "model_number": "SPDK bdev Controller", 00:11:02.518 "max_namespaces": 32, 00:11:02.518 "min_cntlid": 1, 00:11:02.518 "max_cntlid": 65519, 00:11:02.518 "namespaces": [ 00:11:02.518 { 00:11:02.518 "nsid": 1, 00:11:02.518 "bdev_name": "Null2", 00:11:02.518 "name": "Null2", 00:11:02.518 "nguid": "1ECAA9B8B03A4A5B8205CFA02975A71D", 00:11:02.518 "uuid": "1ecaa9b8-b03a-4a5b-8205-cfa02975a71d" 00:11:02.518 } 00:11:02.518 ] 00:11:02.518 }, 00:11:02.518 { 00:11:02.518 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:02.518 "subtype": "NVMe", 00:11:02.518 "listen_addresses": [ 00:11:02.518 { 00:11:02.518 "trtype": "TCP", 00:11:02.518 "adrfam": "IPv4", 00:11:02.518 "traddr": "10.0.0.2", 
00:11:02.518 "trsvcid": "4420" 00:11:02.518 } 00:11:02.518 ], 00:11:02.518 "allow_any_host": true, 00:11:02.518 "hosts": [], 00:11:02.518 "serial_number": "SPDK00000000000003", 00:11:02.518 "model_number": "SPDK bdev Controller", 00:11:02.518 "max_namespaces": 32, 00:11:02.518 "min_cntlid": 1, 00:11:02.518 "max_cntlid": 65519, 00:11:02.518 "namespaces": [ 00:11:02.518 { 00:11:02.518 "nsid": 1, 00:11:02.518 "bdev_name": "Null3", 00:11:02.518 "name": "Null3", 00:11:02.518 "nguid": "0936EC03F5DC40E4A4379DC3949A04C0", 00:11:02.518 "uuid": "0936ec03-f5dc-40e4-a437-9dc3949a04c0" 00:11:02.518 } 00:11:02.518 ] 00:11:02.518 }, 00:11:02.518 { 00:11:02.518 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:02.518 "subtype": "NVMe", 00:11:02.518 "listen_addresses": [ 00:11:02.518 { 00:11:02.518 "trtype": "TCP", 00:11:02.518 "adrfam": "IPv4", 00:11:02.518 "traddr": "10.0.0.2", 00:11:02.518 "trsvcid": "4420" 00:11:02.518 } 00:11:02.518 ], 00:11:02.518 "allow_any_host": true, 00:11:02.518 "hosts": [], 00:11:02.518 "serial_number": "SPDK00000000000004", 00:11:02.518 "model_number": "SPDK bdev Controller", 00:11:02.518 "max_namespaces": 32, 00:11:02.518 "min_cntlid": 1, 00:11:02.518 "max_cntlid": 65519, 00:11:02.518 "namespaces": [ 00:11:02.518 { 00:11:02.518 "nsid": 1, 00:11:02.518 "bdev_name": "Null4", 00:11:02.518 "name": "Null4", 00:11:02.518 "nguid": "C009A5F5685045ECAA3C9D30FAF9A03E", 00:11:02.518 "uuid": "c009a5f5-6850-45ec-aa3c-9d30faf9a03e" 00:11:02.518 } 00:11:02.518 ] 00:11:02.518 } 00:11:02.518 ] 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.518 14:01:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.518 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:02.519 14:01:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:02.519 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:02.519 rmmod nvme_tcp 00:11:02.787 rmmod nvme_fabrics 00:11:02.787 rmmod nvme_keyring 00:11:02.787 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:02.787 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:02.787 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:02.787 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2618354 ']' 00:11:02.787 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2618354 00:11:02.787 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2618354 ']' 00:11:02.787 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2618354 00:11:02.787 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:02.787 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.787 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2618354 00:11:02.787 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:02.787 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:02.787 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2618354' 00:11:02.787 killing process with pid 2618354 00:11:02.787 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2618354 00:11:02.787 14:01:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2618354 00:11:03.048 14:01:09 
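nvmftestfini then unwinds everything the init path set up: unload the initiator-side kernel modules (the rmmod lines above), kill the target, strip only the firewall rules this run added, and delete the namespace, which hands cvl_0_0 back to the root namespace. A minimal sketch, reusing $pid from the startup sketch:

    kill "$pid"; wait "$pid" 2>/dev/null                     # stop nvmf_tgt (reactor_0)
    modprobe -v -r nvme-tcp nvme-fabrics                     # pulls nvme_keyring out with it
    iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the tagged rules
    ip netns delete cvl_0_0_ns_spdk                          # cvl_0_0 falls back to the root ns
    ip -4 addr flush cvl_0_1

The save/filter/restore round-trip works because every rule installed by these scripts carries an SPDK_NVMF comment, so nothing else in the ruleset is touched.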
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:03.048 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:03.048 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:03.048 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:03.048 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:03.048 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:03.048 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:03.048 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:03.048 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:03.048 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.048 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.048 14:01:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.963 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:04.963 00:11:04.963 real 0m11.627s 00:11:04.963 user 0m8.977s 00:11:04.963 sys 0m5.972s 00:11:04.963 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.963 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:04.963 ************************************ 00:11:04.963 END TEST nvmf_target_discovery 00:11:04.963 ************************************ 00:11:04.963 14:01:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:04.963 14:01:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:04.963 14:01:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.963 14:01:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:05.224 ************************************ 00:11:05.224 START TEST nvmf_referrals 00:11:05.224 ************************************ 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:05.224 * Looking for test storage... 
00:11:05.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:05.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.224 --rc genhtml_branch_coverage=1 00:11:05.224 --rc genhtml_function_coverage=1 00:11:05.224 --rc genhtml_legend=1 00:11:05.224 --rc geninfo_all_blocks=1 00:11:05.224 --rc geninfo_unexecuted_blocks=1 00:11:05.224 00:11:05.224 ' 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:05.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.224 --rc genhtml_branch_coverage=1 00:11:05.224 --rc genhtml_function_coverage=1 00:11:05.224 --rc genhtml_legend=1 00:11:05.224 --rc geninfo_all_blocks=1 00:11:05.224 --rc geninfo_unexecuted_blocks=1 00:11:05.224 00:11:05.224 ' 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:05.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.224 --rc genhtml_branch_coverage=1 00:11:05.224 --rc genhtml_function_coverage=1 00:11:05.224 --rc genhtml_legend=1 00:11:05.224 --rc geninfo_all_blocks=1 00:11:05.224 --rc geninfo_unexecuted_blocks=1 00:11:05.224 00:11:05.224 ' 00:11:05.224 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:05.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.224 --rc genhtml_branch_coverage=1 00:11:05.224 --rc genhtml_function_coverage=1 00:11:05.224 --rc genhtml_legend=1 00:11:05.224 --rc geninfo_all_blocks=1 00:11:05.224 --rc geninfo_unexecuted_blocks=1 00:11:05.224 00:11:05.225 ' 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:05.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.225 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.486 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:05.486 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:05.486 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:05.486 14:01:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:13.628 14:01:18 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:13.628 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:13.628 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:13.628 
14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:13.628 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:13.628 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:13.629 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:13.629 14:01:18 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:13.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:13.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:11:13.629 00:11:13.629 --- 10.0.0.2 ping statistics --- 00:11:13.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.629 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:13.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:13.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:11:13.629 00:11:13.629 --- 10.0.0.1 ping statistics --- 00:11:13.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.629 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:13.629 14:01:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.629 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2622748 00:11:13.629 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2622748 00:11:13.629 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:13.629 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2622748 ']' 00:11:13.629 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.629 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.629 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:13.629 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.629 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.629 [2024-12-05 14:01:19.067178] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:11:13.629 [2024-12-05 14:01:19.067249] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.629 [2024-12-05 14:01:19.168745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:13.629 [2024-12-05 14:01:19.225757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:13.629 [2024-12-05 14:01:19.225811] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.629 [2024-12-05 14:01:19.225820] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.629 [2024-12-05 14:01:19.225828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.629 [2024-12-05 14:01:19.225834] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:13.629 [2024-12-05 14:01:19.227948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.629 [2024-12-05 14:01:19.228141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.629 [2024-12-05 14:01:19.228284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.629 [2024-12-05 14:01:19.228284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:13.629 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.629 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:13.629 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:13.629 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:13.629 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.891 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.891 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:13.891 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.891 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.891 [2024-12-05 14:01:19.946700] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.891 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.891 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:13.891 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.891 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
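The referrals test that follows drives a short RPC sequence against that target; rpc_cmd is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock (the socket named in the waitforlisten message above). A condensed sketch of what the trace below performs:

    # create the TCP transport and a discovery listener on 10.0.0.2:8009
    rpc() { scripts/rpc.py "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

    # publish three referrals, then confirm the target reports all of them
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    [[ $(rpc nvmf_discovery_get_referrals | jq length) -eq 3 ]]

The same list is then cross-checked from the initiator side with nvme discover, so the RPC view and the on-the-wire discovery log page must agree.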
00:11:13.891 [2024-12-05 14:01:19.980766] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:13.891 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.891 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:13.891 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.891 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.891 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.891 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:13.891 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.891 14:01:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:13.891 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:14.152 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:14.152 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:14.152 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:14.152 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.152 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:14.152 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.152 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:14.152 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.152 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:14.152 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.152 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:14.152 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.152 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:14.152 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.152 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:14.152 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:14.152 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.152 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:14.152 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.152 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:14.152 14:01:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:14.152 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:14.152 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:14.152 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:14.153 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:14.153 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:14.413 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:14.414 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:14.675 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:14.675 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:14.675 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:14.675 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:14.675 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:14.675 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:14.675 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:14.675 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:14.675 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:14.675 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:14.675 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:14.675 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:14.675 14:01:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:14.936 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:14.936 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:14.936 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.936 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:14.936 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.936 14:01:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:14.936 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:14.936 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:14.936 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:14.936 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.936 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:14.936 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:14.936 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.936 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:14.936 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:14.936 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:14.936 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:14.936 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:14.936 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:14.936 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:14.936 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:15.197 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:15.197 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:15.197 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:15.197 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:15.197 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:15.197 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:15.197 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:15.197 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:15.197 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:15.197 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:15.197 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:15.197 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:15.197 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:15.458 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:15.458 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:15.458 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.458 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.458 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.458 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:15.458 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:15.458 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.458 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:15.458 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.458 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:15.458 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:15.459 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:15.459 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:15.459 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:15.459 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:15.459 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
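Each of these checks uses the same initiator-side probe: fetch the discovery log page from 10.0.0.2:8009 and filter it with jq. A standalone sketch (NVME_HOSTNQN and NVME_HOSTID are the values common.sh derives via nvme gen-hostnqn during this run):

    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
        sort

An empty result means no referrals remain, which is exactly what the final assertion below confirms before teardown begins.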
00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:15.719 rmmod nvme_tcp 00:11:15.719 rmmod nvme_fabrics 00:11:15.719 rmmod nvme_keyring 00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2622748 ']' 00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2622748 00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2622748 ']' 00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2622748 00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2622748 00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2622748' 00:11:15.719 killing process with pid 2622748 00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2622748 00:11:15.719 14:01:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2622748 00:11:15.980 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:15.980 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:15.980 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:15.980 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:15.981 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:15.981 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:15.981 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:15.981 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:15.981 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:15.981 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.981 14:01:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.981 14:01:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.890 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:17.890 00:11:17.890 real 0m12.901s 00:11:17.890 user 0m14.575s 00:11:17.890 sys 0m6.309s 00:11:17.890 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.891 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:17.891 ************************************ 00:11:17.891 END TEST nvmf_referrals 00:11:17.891 ************************************ 00:11:18.151 14:01:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:18.151 14:01:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:18.151 14:01:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.151 14:01:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:18.151 ************************************ 00:11:18.151 START TEST nvmf_connect_disconnect 00:11:18.151 ************************************ 00:11:18.151 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:18.151 * Looking for test storage... 00:11:18.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:18.151 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:18.151 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:11:18.151 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:18.151 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:18.152 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.152 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.152 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.152 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.152 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.413 14:01:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:18.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.413 --rc genhtml_branch_coverage=1 00:11:18.413 --rc genhtml_function_coverage=1 00:11:18.413 --rc genhtml_legend=1 00:11:18.413 --rc geninfo_all_blocks=1 00:11:18.413 --rc geninfo_unexecuted_blocks=1 00:11:18.413 00:11:18.413 ' 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:18.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.413 --rc genhtml_branch_coverage=1 00:11:18.413 --rc genhtml_function_coverage=1 00:11:18.413 --rc genhtml_legend=1 00:11:18.413 --rc geninfo_all_blocks=1 00:11:18.413 --rc geninfo_unexecuted_blocks=1 00:11:18.413 00:11:18.413 ' 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:18.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.413 --rc genhtml_branch_coverage=1 00:11:18.413 --rc genhtml_function_coverage=1 00:11:18.413 --rc genhtml_legend=1 00:11:18.413 --rc geninfo_all_blocks=1 00:11:18.413 --rc geninfo_unexecuted_blocks=1 00:11:18.413 00:11:18.413 ' 00:11:18.413 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:18.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.413 --rc genhtml_branch_coverage=1 00:11:18.413 --rc genhtml_function_coverage=1 00:11:18.414 --rc genhtml_legend=1 00:11:18.414 --rc geninfo_all_blocks=1 00:11:18.414 --rc geninfo_unexecuted_blocks=1 00:11:18.414 00:11:18.414 ' 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.414 14:01:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:18.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:18.414 14:01:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:26.555 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.555 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:26.555 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:26.555 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:26.555 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:26.555 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:26.555 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:26.555 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:26.555 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:26.555 
14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:26.555 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:26.555 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:26.555 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:26.555 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:26.555 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:26.555 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:26.556 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.556 
14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:26.556 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:26.556 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
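The entries above show the two-step NIC discovery performed by nvmf/common.sh: adapters are first bucketed by PCI vendor:device id into the e810/x722/mlx arrays, and each selected PCI address is then mapped to its kernel net device through sysfs. A minimal standalone sketch of both idioms, assuming a pci_bus_cache associative array of the shape the script builds elsewhere (populated here by hand with the two addresses found in this run):

  # Classify NICs by PCI id, then resolve net device names via sysfs.
  declare -A pci_bus_cache=(["0x8086:0x159b"]="0000:4b:00.0 0000:4b:00.1")
  intel=0x8086
  e810=()
  e810+=(${pci_bus_cache["$intel:0x159b"]})             # E810 (0x159b) ports
  for pci in "${e810[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")           # strip path, keep names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done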
00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:26.556 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:26.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:26.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:11:26.556 00:11:26.556 --- 10.0.0.2 ping statistics --- 00:11:26.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.556 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:26.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:26.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:11:26.556 00:11:26.556 --- 10.0.0.1 ping statistics --- 00:11:26.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.556 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:26.556 14:01:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:26.556 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:26.557 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:26.557 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:26.557 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:26.557 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2627775 00:11:26.557 14:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2627775 00:11:26.557 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:26.557 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2627775 ']' 00:11:26.557 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.557 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.557 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.557 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.557 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:26.557 [2024-12-05 14:01:32.100943] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:11:26.557 [2024-12-05 14:01:32.101011] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.557 [2024-12-05 14:01:32.199725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.557 [2024-12-05 14:01:32.253448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.557 [2024-12-05 14:01:32.253535] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.557 [2024-12-05 14:01:32.253545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.557 [2024-12-05 14:01:32.253553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.557 [2024-12-05 14:01:32.253560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
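Before nvmf_tgt is started, nvmftestinit (traced above) wires the two E810 ports into a point-to-point topology: cvl_0_1 stays in the default namespace as the initiator (10.0.0.1) while cvl_0_0 moves into a fresh cvl_0_0_ns_spdk namespace as the target (10.0.0.2), and the ipts wrapper tags its ACCEPT rule with an SPDK_NVMF comment so teardown can find it later. Condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator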
00:11:26.557 [2024-12-05 14:01:32.255560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.557 [2024-12-05 14:01:32.255744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.557 [2024-12-05 14:01:32.255886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.557 [2024-12-05 14:01:32.255887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.818 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.818 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:26.818 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:26.818 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:26.818 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:26.818 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.818 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:26.818 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.818 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:26.818 [2024-12-05 14:01:32.977489] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.818 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.818 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:26.818 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.818 14:01:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:26.818 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.818 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:26.818 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:26.818 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.818 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:26.818 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.818 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:26.818 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.818 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:26.818 14:01:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.818 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.818 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.818 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:26.818 [2024-12-05 14:01:33.062536] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.818 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.818 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:26.818 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:26.818 14:01:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:31.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.117 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:45.117 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:45.117 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:45.117 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:45.117 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:45.117 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:45.117 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:45.117 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:45.117 rmmod nvme_tcp 00:11:45.117 rmmod nvme_fabrics 00:11:45.117 rmmod nvme_keyring 00:11:45.117 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:45.117 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:45.117 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:45.117 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2627775 ']' 00:11:45.117 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2627775 00:11:45.117 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2627775 ']' 00:11:45.117 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2627775 00:11:45.117 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
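With nvmf_tgt running inside the namespace (started above via ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF), connect_disconnect.sh provisions the target over /var/tmp/spdk.sock and then loops nvme connect/disconnect against it; the five "NQN:... disconnected 1 controller(s)" lines above are those iterations. A condensed sketch, with scripts/rpc.py standing in for the rpc_cmd wrapper used by the test:

  rpc="./scripts/rpc.py"                                 # talks to /var/tmp/spdk.sock
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
  $rpc bdev_malloc_create 64 512                         # 64 MiB / 512 B blocks -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  for i in $(seq 1 5); do                                # num_iterations=5 in the trace
      nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
      sleep 1    # the real test waits for the namespace device rather than sleeping
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # "disconnected 1 controller(s)"
  done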
00:11:45.117 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:45.117 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2627775 00:11:45.117 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:45.117 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:45.117 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2627775' 00:11:45.117 killing process with pid 2627775 00:11:45.117 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2627775 00:11:45.378 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2627775 00:11:45.378 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:45.378 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:45.378 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:45.378 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:45.378 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:45.378 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:45.378 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:45.378 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:45.378 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:45.378 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.378 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.378 14:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.922 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:47.922 00:11:47.922 real 0m29.347s 00:11:47.922 user 1m18.959s 00:11:47.922 sys 0m6.954s 00:11:47.922 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.922 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:47.922 ************************************ 00:11:47.922 END TEST nvmf_connect_disconnect 00:11:47.922 ************************************ 00:11:47.922 14:01:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:47.922 14:01:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:47.922 14:01:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.922 14:01:53 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:47.922 ************************************ 00:11:47.922 START TEST nvmf_multitarget 00:11:47.922 ************************************ 00:11:47.922 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:47.922 * Looking for test storage... 00:11:47.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:47.922 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:47.922 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:11:47.922 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:47.922 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:47.922 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.922 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.922 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.922 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.922 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.922 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.922 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.922 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.922 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.922 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.922 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:47.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.923 --rc genhtml_branch_coverage=1 00:11:47.923 --rc genhtml_function_coverage=1 00:11:47.923 --rc genhtml_legend=1 00:11:47.923 --rc geninfo_all_blocks=1 00:11:47.923 --rc geninfo_unexecuted_blocks=1 00:11:47.923 00:11:47.923 ' 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:47.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.923 --rc genhtml_branch_coverage=1 00:11:47.923 --rc genhtml_function_coverage=1 00:11:47.923 --rc genhtml_legend=1 00:11:47.923 --rc geninfo_all_blocks=1 00:11:47.923 --rc geninfo_unexecuted_blocks=1 00:11:47.923 00:11:47.923 ' 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:47.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.923 --rc genhtml_branch_coverage=1 00:11:47.923 --rc genhtml_function_coverage=1 00:11:47.923 --rc genhtml_legend=1 00:11:47.923 --rc geninfo_all_blocks=1 00:11:47.923 --rc geninfo_unexecuted_blocks=1 00:11:47.923 00:11:47.923 ' 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:47.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.923 --rc genhtml_branch_coverage=1 00:11:47.923 --rc genhtml_function_coverage=1 00:11:47.923 --rc genhtml_legend=1 00:11:47.923 --rc geninfo_all_blocks=1 00:11:47.923 --rc geninfo_unexecuted_blocks=1 00:11:47.923 00:11:47.923 ' 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:47.923 14:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:47.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:47.923 14:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:47.923 14:01:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
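The "Looking for test storage..." block a few entries back also captured scripts/common.sh comparing the installed lcov version against 2 (lt 1.15 2 via cmp_versions) to choose LCOV_OPTS. The idiom, simplified into a standalone sketch (the real helper additionally validates each field through decimal()):

  # Field-by-field numeric version compare, as in scripts/common.sh.
  lt() {                                 # lt 1.15 2 -> true if $1 < $2
      local -a ver1 ver2; local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1                           # equal -> not less-than
  }
  lt 1.15 2 && echo "lcov predates the 2.x option format"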
00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:56.064 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:56.064 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:56.064 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:56.064 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:56.064 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:56.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:56.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:11:56.065 00:11:56.065 --- 10.0.0.2 ping statistics --- 00:11:56.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.065 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:56.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:56.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:11:56.065 00:11:56.065 --- 10.0.0.1 ping statistics --- 00:11:56.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.065 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2635738 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2635738 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2635738 ']' 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.065 14:02:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:56.065 [2024-12-05 14:02:01.519270] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:11:56.065 [2024-12-05 14:02:01.519341] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.065 [2024-12-05 14:02:01.622001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:56.065 [2024-12-05 14:02:01.676110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.065 [2024-12-05 14:02:01.676166] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:56.065 [2024-12-05 14:02:01.676174] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.065 [2024-12-05 14:02:01.676182] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.065 [2024-12-05 14:02:01.676187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:56.065 [2024-12-05 14:02:01.678321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.065 [2024-12-05 14:02:01.678513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.065 [2024-12-05 14:02:01.678599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.065 [2024-12-05 14:02:01.678601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.065 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.065 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:56.065 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:56.065 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:56.065 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:56.328 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.328 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:56.328 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:56.328 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:56.328 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:56.328 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:56.328 "nvmf_tgt_1" 00:11:56.329 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:56.591 "nvmf_tgt_2" 00:11:56.591 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
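The jq length checks in this stretch are the multitarget test's entire assertion logic: the target count must go 1 -> 3 -> 1 as nvmf_tgt_1 and nvmf_tgt_2 are created (each echoing its name) and deleted again (each echoing true). Replayed as a standalone sketch with the helper path shortened:

  rpc="./test/nvmf/target/multitarget_rpc.py"
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]      # only the default target
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]      # default + two new targets
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]      # back to just the default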
00:11:56.591 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:56.591 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:56.591 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:56.852 true 00:11:56.852 14:02:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:56.852 true 00:11:56.852 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:56.852 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:57.114 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:57.114 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:57.114 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:57.114 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:57.114 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:57.114 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:57.114 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:57.114 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:57.114 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:57.114 rmmod nvme_tcp 00:11:57.114 rmmod nvme_fabrics 00:11:57.114 rmmod nvme_keyring 00:11:57.114 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:57.114 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:57.114 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:57.114 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2635738 ']' 00:11:57.114 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2635738 00:11:57.114 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2635738 ']' 00:11:57.114 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2635738 00:11:57.114 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:57.114 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:57.114 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2635738 00:11:57.114 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:57.114 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:57.114 14:02:03 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2635738' 00:11:57.114 killing process with pid 2635738 00:11:57.114 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2635738 00:11:57.114 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2635738 00:11:57.376 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:57.376 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:57.376 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:57.376 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:57.376 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:57.376 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:57.376 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:57.376 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:57.376 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:57.376 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.376 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:57.376 14:02:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.307 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:59.307 00:11:59.307 real 0m11.886s 00:11:59.307 user 0m10.215s 00:11:59.307 sys 0m6.260s 00:11:59.307 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.307 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:59.307 ************************************ 00:11:59.307 END TEST nvmf_multitarget 00:11:59.307 ************************************ 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:59.568 ************************************ 00:11:59.568 START TEST nvmf_rpc 00:11:59.568 ************************************ 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:59.568 * Looking for test storage... 
00:11:59.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:59.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.568 --rc genhtml_branch_coverage=1 00:11:59.568 --rc genhtml_function_coverage=1 00:11:59.568 --rc genhtml_legend=1 00:11:59.568 --rc geninfo_all_blocks=1 00:11:59.568 --rc geninfo_unexecuted_blocks=1 00:11:59.568 00:11:59.568 ' 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:59.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.568 --rc genhtml_branch_coverage=1 00:11:59.568 --rc genhtml_function_coverage=1 00:11:59.568 --rc genhtml_legend=1 00:11:59.568 --rc geninfo_all_blocks=1 00:11:59.568 --rc geninfo_unexecuted_blocks=1 00:11:59.568 00:11:59.568 ' 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:59.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.568 --rc genhtml_branch_coverage=1 00:11:59.568 --rc genhtml_function_coverage=1 00:11:59.568 --rc genhtml_legend=1 00:11:59.568 --rc geninfo_all_blocks=1 00:11:59.568 --rc geninfo_unexecuted_blocks=1 00:11:59.568 00:11:59.568 ' 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:59.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.568 --rc genhtml_branch_coverage=1 00:11:59.568 --rc genhtml_function_coverage=1 00:11:59.568 --rc genhtml_legend=1 00:11:59.568 --rc geninfo_all_blocks=1 00:11:59.568 --rc geninfo_unexecuted_blocks=1 00:11:59.568 00:11:59.568 ' 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:59.568 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:59.829 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
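The lt 1.15 2 trace above is a field-by-field version comparison from scripts/common.sh; the same gate can be sketched more compactly with sort -V (a swapped-in equivalent for illustration, not the code traced here):

    lcov_ver=$(lcov --version | awk '{print $NF}')
    # true if $1 sorts strictly before $2 in version order
    version_lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
    if version_lt "$lcov_ver" 2; then
      # pre-2.0 lcov takes the branch/function coverage knobs as --rc options,
      # which is exactly what the LCOV_OPTS/LCOV exports above carry
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi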
00:11:59.829 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.829 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.829 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.829 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.829 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.829 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.829 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.829 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.829 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.829 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:59.829 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:59.829 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.829 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.829 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:59.829 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.829 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:59.829 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:59.829 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.829 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:59.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:59.830 14:02:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:59.830 14:02:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:07.985 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:07.985 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:07.985 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:07.985 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:07.985 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:07.986 14:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:07.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:12:07.986 00:12:07.986 --- 10.0.0.2 ping statistics --- 00:12:07.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.986 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:07.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:12:07.986 00:12:07.986 --- 10.0.0.1 ping statistics --- 00:12:07.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.986 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2640321 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2640321 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2640321 ']' 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.986 14:02:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:07.986 [2024-12-05 14:02:13.483698] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
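The interface wiring that the two pings above just verified is, condensed, the following sequence (device names and addresses exactly as in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side e810 port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns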
00:12:07.986 [2024-12-05 14:02:13.483765] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.986 [2024-12-05 14:02:13.584482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.986 [2024-12-05 14:02:13.637997] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.986 [2024-12-05 14:02:13.638055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.986 [2024-12-05 14:02:13.638063] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.986 [2024-12-05 14:02:13.638070] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.986 [2024-12-05 14:02:13.638077] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:07.986 [2024-12-05 14:02:13.640188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.986 [2024-12-05 14:02:13.640347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.986 [2024-12-05 14:02:13.640513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.986 [2024-12-05 14:02:13.640567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.246 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.246 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:08.246 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:08.246 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:08.246 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.246 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.246 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:08.246 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.247 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.247 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.247 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:08.247 "tick_rate": 2400000000, 00:12:08.247 "poll_groups": [ 00:12:08.247 { 00:12:08.247 "name": "nvmf_tgt_poll_group_000", 00:12:08.247 "admin_qpairs": 0, 00:12:08.247 "io_qpairs": 0, 00:12:08.247 "current_admin_qpairs": 0, 00:12:08.247 "current_io_qpairs": 0, 00:12:08.247 "pending_bdev_io": 0, 00:12:08.247 "completed_nvme_io": 0, 00:12:08.247 "transports": [] 00:12:08.247 }, 00:12:08.247 { 00:12:08.247 "name": "nvmf_tgt_poll_group_001", 00:12:08.247 "admin_qpairs": 0, 00:12:08.247 "io_qpairs": 0, 00:12:08.247 "current_admin_qpairs": 0, 00:12:08.247 "current_io_qpairs": 0, 00:12:08.247 "pending_bdev_io": 0, 00:12:08.247 "completed_nvme_io": 0, 00:12:08.247 "transports": [] 00:12:08.247 }, 00:12:08.247 { 00:12:08.247 "name": "nvmf_tgt_poll_group_002", 00:12:08.247 "admin_qpairs": 0, 00:12:08.247 "io_qpairs": 0, 00:12:08.247 
"current_admin_qpairs": 0, 00:12:08.247 "current_io_qpairs": 0, 00:12:08.247 "pending_bdev_io": 0, 00:12:08.247 "completed_nvme_io": 0, 00:12:08.247 "transports": [] 00:12:08.247 }, 00:12:08.247 { 00:12:08.247 "name": "nvmf_tgt_poll_group_003", 00:12:08.247 "admin_qpairs": 0, 00:12:08.247 "io_qpairs": 0, 00:12:08.247 "current_admin_qpairs": 0, 00:12:08.247 "current_io_qpairs": 0, 00:12:08.247 "pending_bdev_io": 0, 00:12:08.247 "completed_nvme_io": 0, 00:12:08.247 "transports": [] 00:12:08.247 } 00:12:08.247 ] 00:12:08.247 }' 00:12:08.247 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:08.247 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:08.247 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:08.247 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:08.247 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:08.247 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:08.247 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:08.247 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:08.247 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.247 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.247 [2024-12-05 14:02:14.483810] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.247 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.247 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:08.247 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.247 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.247 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.247 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:08.247 "tick_rate": 2400000000, 00:12:08.247 "poll_groups": [ 00:12:08.247 { 00:12:08.247 "name": "nvmf_tgt_poll_group_000", 00:12:08.247 "admin_qpairs": 0, 00:12:08.247 "io_qpairs": 0, 00:12:08.247 "current_admin_qpairs": 0, 00:12:08.247 "current_io_qpairs": 0, 00:12:08.247 "pending_bdev_io": 0, 00:12:08.247 "completed_nvme_io": 0, 00:12:08.247 "transports": [ 00:12:08.247 { 00:12:08.247 "trtype": "TCP" 00:12:08.247 } 00:12:08.247 ] 00:12:08.247 }, 00:12:08.247 { 00:12:08.247 "name": "nvmf_tgt_poll_group_001", 00:12:08.247 "admin_qpairs": 0, 00:12:08.247 "io_qpairs": 0, 00:12:08.247 "current_admin_qpairs": 0, 00:12:08.247 "current_io_qpairs": 0, 00:12:08.247 "pending_bdev_io": 0, 00:12:08.247 "completed_nvme_io": 0, 00:12:08.247 "transports": [ 00:12:08.247 { 00:12:08.247 "trtype": "TCP" 00:12:08.247 } 00:12:08.247 ] 00:12:08.247 }, 00:12:08.247 { 00:12:08.247 "name": "nvmf_tgt_poll_group_002", 00:12:08.247 "admin_qpairs": 0, 00:12:08.247 "io_qpairs": 0, 00:12:08.247 "current_admin_qpairs": 0, 00:12:08.247 "current_io_qpairs": 0, 00:12:08.247 "pending_bdev_io": 0, 00:12:08.247 "completed_nvme_io": 0, 00:12:08.247 "transports": [ 00:12:08.247 { 00:12:08.247 "trtype": "TCP" 
00:12:08.247 } 00:12:08.247 ] 00:12:08.247 }, 00:12:08.247 { 00:12:08.247 "name": "nvmf_tgt_poll_group_003", 00:12:08.247 "admin_qpairs": 0, 00:12:08.247 "io_qpairs": 0, 00:12:08.247 "current_admin_qpairs": 0, 00:12:08.247 "current_io_qpairs": 0, 00:12:08.247 "pending_bdev_io": 0, 00:12:08.247 "completed_nvme_io": 0, 00:12:08.247 "transports": [ 00:12:08.247 { 00:12:08.247 "trtype": "TCP" 00:12:08.247 } 00:12:08.247 ] 00:12:08.247 } 00:12:08.247 ] 00:12:08.247 }' 00:12:08.247 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:08.247 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:08.247 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:08.247 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.507 Malloc1 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.507 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.508 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.508 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.508 [2024-12-05 14:02:14.700175] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.508 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.508 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:08.508 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:08.508 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:08.508 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:08.508 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.508 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:08.508 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.508 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:08.508 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.508 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:08.508 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:08.508 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:08.508 [2024-12-05 14:02:14.737210] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:08.508 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:08.508 could not add new controller: failed to write to nvme-fabrics device 00:12:08.508 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:08.508 14:02:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:08.508 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:08.508 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:08.508 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:08.508 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.508 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:08.508 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.508 14:02:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.420 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:10.420 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:10.420 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.420 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:10.420 14:02:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:12.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:12.500 [2024-12-05 14:02:18.443554] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:12.500 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:12.500 could not add new controller: failed to write to nvme-fabrics device 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.500 
14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.500 14:02:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:13.895 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:13.895 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:13.895 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:13.895 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:13.895 14:02:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:15.807 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:15.807 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:15.807 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:15.807 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:15.807 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:15.807 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:15.807 14:02:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.807 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:15.807 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:15.807 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:15.807 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.807 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:15.807 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.807 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:15.807 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.807 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.807 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.068 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.068 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:16.068 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:16.068 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:16.068 
14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.068 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.068 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.068 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.068 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.068 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.068 [2024-12-05 14:02:22.138608] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.068 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.068 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:16.068 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.068 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.068 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.068 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:16.068 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.069 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.069 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.069 14:02:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:17.450 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:17.450 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:17.450 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:17.450 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:17.450 14:02:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:20.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.015 [2024-12-05 14:02:25.853458] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.015 14:02:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.396 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:21.396 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:21.396 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:21.396 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:21.396 14:02:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.306 [2024-12-05 14:02:29.538942] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.306 14:02:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:25.217 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:25.217 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:25.217 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:25.217 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:25.217 14:02:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:27.127 
14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:27.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
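For orientation: the create/connect/teardown cycle being traced here is the loop tagged target/rpc.sh@81-94 in the records above. Reconstructed from the traced commands alone — a sketch, not the verbatim test source — one iteration looks roughly like this (NVME_HOST is assumed to expand to the --hostnqn/--hostid pair visible in the connect lines):

    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done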
00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.127 [2024-12-05 14:02:33.222072] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.127 14:02:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:28.511 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:28.511 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:28.511 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.511 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:28.511 14:02:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:30.420 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:30.420 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:30.420 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:30.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
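The waitforserial / waitforserial_disconnect polling traced around common/autotest_common.sh@1202-1235 reduces to a retry loop over lsblk. A minimal sketch reconstructed from the traced lines — the retry bound and argument handling are inferred, not verbatim:

    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=${2:-1} nvme_devices=0
        # poll until the expected number of block devices carrying the
        # subsystem serial shows up, sleeping 2s between tries
        while ((i++ <= 15)); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            ((nvme_devices == nvme_device_counter)) && return 0
        done
        return 1
    }

waitforserial_disconnect is the mirror image: it loops until grep -q -w no longer finds the serial in the lsblk output.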
00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.682 [2024-12-05 14:02:36.915962] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.682 14:02:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:32.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:32.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:32.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:32.593 14:02:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:34.504 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:34.504 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:34.504 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:34.504 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:34.504 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:34.504 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:34.504 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.504 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:34.504 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:34.504 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:34.504 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.504 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:34.504 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.504 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:34.504 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:34.504 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.504 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.504 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.504 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.504 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:34.505 
14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.505 [2024-12-05 14:02:40.588193] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.505 [2024-12-05 14:02:40.656365] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.505 
14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.505 [2024-12-05 14:02:40.724572] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.505 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.505 [2024-12-05 14:02:40.796808] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.766 [2024-12-05 14:02:40.869028] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.766 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.767 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.767 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.767 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.767 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:34.767 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.767 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:34.767 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.767 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:34.767 "tick_rate": 2400000000, 00:12:34.767 "poll_groups": [ 00:12:34.767 { 00:12:34.767 "name": "nvmf_tgt_poll_group_000", 00:12:34.767 "admin_qpairs": 0, 00:12:34.767 "io_qpairs": 224, 00:12:34.767 "current_admin_qpairs": 0, 00:12:34.767 "current_io_qpairs": 0, 00:12:34.767 "pending_bdev_io": 0, 00:12:34.767 "completed_nvme_io": 333, 00:12:34.767 "transports": [ 00:12:34.767 { 00:12:34.767 "trtype": "TCP" 00:12:34.767 } 00:12:34.767 ] 00:12:34.767 }, 00:12:34.767 { 00:12:34.767 "name": "nvmf_tgt_poll_group_001", 00:12:34.767 "admin_qpairs": 1, 00:12:34.767 "io_qpairs": 223, 00:12:34.767 "current_admin_qpairs": 0, 00:12:34.767 "current_io_qpairs": 0, 00:12:34.767 "pending_bdev_io": 0, 00:12:34.767 "completed_nvme_io": 224, 00:12:34.767 "transports": [ 00:12:34.767 { 00:12:34.767 "trtype": "TCP" 00:12:34.767 } 00:12:34.767 ] 00:12:34.767 }, 00:12:34.767 { 00:12:34.767 "name": "nvmf_tgt_poll_group_002", 00:12:34.767 "admin_qpairs": 6, 00:12:34.767 "io_qpairs": 218, 00:12:34.767 "current_admin_qpairs": 0, 00:12:34.767 "current_io_qpairs": 0, 00:12:34.767 "pending_bdev_io": 0, 00:12:34.767 "completed_nvme_io": 221, 00:12:34.767 "transports": [ 00:12:34.767 { 00:12:34.767 "trtype": "TCP" 00:12:34.767 } 00:12:34.767 ] 00:12:34.767 }, 00:12:34.767 { 00:12:34.767 "name": "nvmf_tgt_poll_group_003", 00:12:34.767 "admin_qpairs": 0, 00:12:34.767 "io_qpairs": 224, 00:12:34.767 "current_admin_qpairs": 0, 00:12:34.767 "current_io_qpairs": 0, 00:12:34.767 "pending_bdev_io": 0, 00:12:34.767 "completed_nvme_io": 461, 00:12:34.767 "transports": [ 00:12:34.767 { 00:12:34.767 "trtype": "TCP" 00:12:34.767 } 00:12:34.767 ] 00:12:34.767 } 00:12:34.767 ] 00:12:34.767 }' 00:12:34.767 14:02:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:34.767 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:34.767 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:34.767 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:34.767 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:34.767 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:34.767 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:34.767 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:34.767 14:02:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:34.767 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:12:34.767 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:34.767 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:34.767 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:34.767 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:34.767 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:34.767 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:34.767 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:34.767 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:34.767 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:34.767 rmmod nvme_tcp 00:12:35.027 rmmod nvme_fabrics 00:12:35.027 rmmod nvme_keyring 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2640321 ']' 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2640321 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2640321 ']' 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2640321 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2640321 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2640321' 00:12:35.027 killing process with pid 2640321 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2640321 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2640321 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.027 14:02:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.568 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:37.568 00:12:37.568 real 0m37.723s 00:12:37.568 user 1m52.639s 00:12:37.568 sys 0m7.585s 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.569 ************************************ 00:12:37.569 END TEST nvmf_rpc 00:12:37.569 ************************************ 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:37.569 ************************************ 00:12:37.569 START TEST nvmf_invalid 00:12:37.569 ************************************ 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:37.569 * Looking for test storage... 
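Closing out the nvmf_rpc run above: the "killing process with pid 2640321" records go through the killprocess helper traced at common/autotest_common.sh@954-978. Loosely reconstructed from those lines — the sudo special case and exact return handling are simplified assumptions:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1          # target process still alive?
        if [[ $(uname) == Linux ]]; then
            # refuse to signal a sudo wrapper directly
            [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                 # reap it if it is our child
    }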
00:12:37.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:37.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.569 --rc genhtml_branch_coverage=1 00:12:37.569 --rc genhtml_function_coverage=1 00:12:37.569 --rc genhtml_legend=1 00:12:37.569 --rc geninfo_all_blocks=1 00:12:37.569 --rc geninfo_unexecuted_blocks=1 00:12:37.569 00:12:37.569 ' 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:37.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.569 --rc genhtml_branch_coverage=1 00:12:37.569 --rc genhtml_function_coverage=1 00:12:37.569 --rc genhtml_legend=1 00:12:37.569 --rc geninfo_all_blocks=1 00:12:37.569 --rc geninfo_unexecuted_blocks=1 00:12:37.569 00:12:37.569 ' 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:37.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.569 --rc genhtml_branch_coverage=1 00:12:37.569 --rc genhtml_function_coverage=1 00:12:37.569 --rc genhtml_legend=1 00:12:37.569 --rc geninfo_all_blocks=1 00:12:37.569 --rc geninfo_unexecuted_blocks=1 00:12:37.569 00:12:37.569 ' 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:37.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.569 --rc genhtml_branch_coverage=1 00:12:37.569 --rc genhtml_function_coverage=1 00:12:37.569 --rc genhtml_legend=1 00:12:37.569 --rc geninfo_all_blocks=1 00:12:37.569 --rc geninfo_unexecuted_blocks=1 00:12:37.569 00:12:37.569 ' 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:37.569 14:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:37.569 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:37.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
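
[editor's note] Two things worth flagging in the block above. First, paths/export.sh prepends the same go/protoc/golangci directories on every source, which is why PATH accumulates many duplicate entries here (harmless, just noisy). Second, the captured bash error `common.sh: line 33: [: : integer expression expected` comes from the trace `'[' '' -eq 1 ']'` inside build_nvmf_app_args: an unset/empty variable fed to an arithmetic `-eq` test. The test correctly evaluates false, so behavior is unchanged, but the usual fix is to default the variable. A hedged sketch (the variable and flag names below are placeholders, not the actual ones from common.sh):

# Sketch: guard an integer test against an empty/unset variable.
# SPDK_TEST_SOMETHING and --some-flag are hypothetical, for illustration only.
if [ "${SPDK_TEST_SOMETHING:-0}" -eq 1 ]; then
  NVMF_APP+=(--some-flag)
fi
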
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:37.570 14:02:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
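
[editor's note] `nvmftestinit` with NET_TYPE=phy takes the hardware path: it tears down any stale SPDK network namespace (`remove_spdk_ns`, with xtrace suppressed), then `gather_supported_nvmf_pci_devs` builds per-family PCI arrays (e810, x722, mlx) keyed by vendor:device IDs, which is the long trace block that follows. A minimal sketch of the same discovery idea using lspci (an assumption for illustration; the in-tree code reads a prebuilt pci_bus_cache instead, and the 0000: domain prefix is assumed):

# Sketch: collect PCI addresses of Intel E810 NICs (device IDs 0x1592/0x159b).
e810=()
while read -r addr _; do
  e810+=("0000:$addr")            # assumes PCI domain 0000
done < <(lspci -n -d 8086:1592; lspci -n -d 8086:159b)
printf 'Found %s\n' "${e810[@]}"
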
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:45.721 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:45.721 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:45.721 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:45.721 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
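
[editor's note] For each matched PCI function the script resolves the kernel net device by globbing /sys/bus/pci/devices/$pci/net/*, which is exactly how `cvl_0_0` (under 0000:4b:00.0) and `cvl_0_1` (under 0000:4b:00.1) are reported above. A sketch of that mapping:

# Sketch: PCI address -> kernel net device name via sysfs,
# mirroring the pci_net_devs glob shown in the trace.
pci=0000:4b:00.0
for path in /sys/bus/pci/devices/$pci/net/*; do
  [ -e "$path" ] && echo "Found net device under $pci: ${path##*/}"
done
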
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:45.721 14:02:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:45.721 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:45.721 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:45.721 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:45.721 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:45.721 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:45.721 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:45.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:45.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:12:45.721 00:12:45.721 --- 10.0.0.2 ping statistics --- 00:12:45.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.721 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:12:45.721 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:45.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:45.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:12:45.721 00:12:45.721 --- 10.0.0.1 ping statistics --- 00:12:45.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.721 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:12:45.721 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.721 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:45.721 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:45.721 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.721 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:45.721 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:45.721 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.722 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:45.722 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:45.722 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:45.722 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:45.722 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:45.722 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:45.722 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2650173 00:12:45.722 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2650173 00:12:45.722 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:45.722 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2650173 ']' 00:12:45.722 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.722 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:45.722 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.722 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:45.722 14:02:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:45.722 [2024-12-05 14:02:51.241968] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
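
[editor's note] With two usable ports, `nvmf_tcp_init` builds a two-endpoint topology on a single host: cvl_0_0 is moved into a fresh namespace cvl_0_0_ns_spdk and addressed 10.0.0.2/24 (target side), while cvl_0_1 stays in the root namespace as 10.0.0.1/24 (initiator side); an iptables ACCEPT rule opens TCP/4420, and both directions are smoke-tested with ping before the target starts. The essential commands, condensed from the trace above:

# Condensed from the trace: split target/initiator across a network namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

After the pings succeed, `nvmfappstart -m 0xF` launches nvmf_tgt inside the namespace (here PID 2650173, four reactor cores) and `waitforlisten` blocks until the app's JSON-RPC socket at /var/tmp/spdk.sock accepts connections.
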
00:12:45.722 [2024-12-05 14:02:51.242039] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.722 [2024-12-05 14:02:51.343630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:45.722 [2024-12-05 14:02:51.396568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.722 [2024-12-05 14:02:51.396623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.722 [2024-12-05 14:02:51.396631] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.722 [2024-12-05 14:02:51.396639] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.722 [2024-12-05 14:02:51.396645] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:45.722 [2024-12-05 14:02:51.398780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.722 [2024-12-05 14:02:51.399004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.722 [2024-12-05 14:02:51.399164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:45.722 [2024-12-05 14:02:51.399166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.982 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:45.982 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:45.982 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:45.982 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:45.982 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:45.982 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.982 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:45.982 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28348 00:12:46.244 [2024-12-05 14:02:52.281618] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:46.244 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:46.244 { 00:12:46.244 "nqn": "nqn.2016-06.io.spdk:cnode28348", 00:12:46.244 "tgt_name": "foobar", 00:12:46.244 "method": "nvmf_create_subsystem", 00:12:46.244 "req_id": 1 00:12:46.244 } 00:12:46.244 Got JSON-RPC error response 00:12:46.244 response: 00:12:46.244 { 00:12:46.244 "code": -32603, 00:12:46.244 "message": "Unable to find target foobar" 00:12:46.244 }' 00:12:46.244 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:46.244 { 00:12:46.244 "nqn": "nqn.2016-06.io.spdk:cnode28348", 00:12:46.244 "tgt_name": "foobar", 00:12:46.244 "method": "nvmf_create_subsystem", 00:12:46.244 "req_id": 1 00:12:46.244 } 00:12:46.244 Got JSON-RPC error response 00:12:46.244 
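
[editor's note] This is the first negative case of invalid.sh: ask for a subsystem on a nonexistent target with `nvmf_create_subsystem -t foobar`. The daemon logs "Unable to find target foobar", the RPC returns JSON-RPC error -32603, and the script captures the full request/response text and glob-matches it, as the next lines show. A sketch of the same check, assuming a running nvmf_tgt and the in-tree scripts/rpc.py:

# Sketch: assert that creating a subsystem on an unknown target fails,
# mirroring target/invalid.sh. Assumes nvmf_tgt listens on /var/tmp/spdk.sock.
out=$(scripts/rpc.py nvmf_create_subsystem -t foobar \
      nqn.2016-06.io.spdk:cnode28348 2>&1) || true
[[ $out == *"Unable to find target"* ]] || { echo "unexpected: $out"; exit 1; }
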
response: 00:12:46.244 { 00:12:46.244 "code": -32603, 00:12:46.244 "message": "Unable to find target foobar" 00:12:46.244 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:46.244 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:46.244 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19772 00:12:46.244 [2024-12-05 14:02:52.490473] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19772: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:46.244 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:46.244 { 00:12:46.244 "nqn": "nqn.2016-06.io.spdk:cnode19772", 00:12:46.244 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:46.244 "method": "nvmf_create_subsystem", 00:12:46.244 "req_id": 1 00:12:46.244 } 00:12:46.244 Got JSON-RPC error response 00:12:46.244 response: 00:12:46.244 { 00:12:46.244 "code": -32602, 00:12:46.244 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:46.244 }' 00:12:46.244 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:46.244 { 00:12:46.244 "nqn": "nqn.2016-06.io.spdk:cnode19772", 00:12:46.245 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:46.245 "method": "nvmf_create_subsystem", 00:12:46.245 "req_id": 1 00:12:46.245 } 00:12:46.245 Got JSON-RPC error response 00:12:46.245 response: 00:12:46.245 { 00:12:46.245 "code": -32602, 00:12:46.245 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:46.245 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:46.245 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:46.245 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode28039 00:12:46.506 [2024-12-05 14:02:52.699171] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28039: invalid model number 'SPDK_Controller' 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:46.506 { 00:12:46.506 "nqn": "nqn.2016-06.io.spdk:cnode28039", 00:12:46.506 "model_number": "SPDK_Controller\u001f", 00:12:46.506 "method": "nvmf_create_subsystem", 00:12:46.506 "req_id": 1 00:12:46.506 } 00:12:46.506 Got JSON-RPC error response 00:12:46.506 response: 00:12:46.506 { 00:12:46.506 "code": -32602, 00:12:46.506 "message": "Invalid MN SPDK_Controller\u001f" 00:12:46.506 }' 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:46.506 { 00:12:46.506 "nqn": "nqn.2016-06.io.spdk:cnode28039", 00:12:46.506 "model_number": "SPDK_Controller\u001f", 00:12:46.506 "method": "nvmf_create_subsystem", 00:12:46.506 "req_id": 1 00:12:46.506 } 00:12:46.506 Got JSON-RPC error response 00:12:46.506 response: 00:12:46.506 { 00:12:46.506 "code": -32602, 00:12:46.506 "message": "Invalid MN SPDK_Controller\u001f" 00:12:46.506 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:46.506 14:02:52 
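
[editor's note] The next two cases embed a nonprintable byte: `$'SPDKISFASTANDAWESOME\037'` appends 0x1f (unit separator) to an otherwise valid serial number, and `$'SPDK_Controller\037'` does the same to a model number. Both are rejected with JSON-RPC error -32602 ("Invalid SN" / "Invalid MN"), and the JSON response escapes the byte as \u001f. The long trace that follows is `gen_random_s 21` building a 21-character random serial, one `printf %x` / `echo -e` pair per character. A sketch of the control-character case, assuming rpc.py as above:

# Sketch: serial numbers must be printable ASCII; an embedded 0x1f is rejected.
out=$(scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' \
      nqn.2016-06.io.spdk:cnode19772 2>&1) || true
[[ $out == *"Invalid SN"* ]] || { echo "unexpected: $out"; exit 1; }
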
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.506 14:02:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.506 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:46.767 
14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 
00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ $ == \- ]] 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '$7|E5glS$+xS0N>~aiZ2.' 00:12:46.767 14:02:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '$7|E5glS$+xS0N>~aiZ2.' nqn.2016-06.io.spdk:cnode24990 00:12:47.029 [2024-12-05 14:02:53.080597] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24990: invalid serial number '$7|E5glS$+xS0N>~aiZ2.' 00:12:47.029 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:47.029 { 00:12:47.029 "nqn": "nqn.2016-06.io.spdk:cnode24990", 00:12:47.029 "serial_number": "$7|E5glS$+xS0N>~aiZ2.", 00:12:47.029 "method": "nvmf_create_subsystem", 00:12:47.029 "req_id": 1 00:12:47.029 } 00:12:47.029 Got JSON-RPC error response 00:12:47.029 response: 00:12:47.029 { 00:12:47.029 "code": -32602, 00:12:47.029 "message": "Invalid SN $7|E5glS$+xS0N>~aiZ2." 00:12:47.029 }' 00:12:47.029 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:47.029 { 00:12:47.029 "nqn": "nqn.2016-06.io.spdk:cnode24990", 00:12:47.029 "serial_number": "$7|E5glS$+xS0N>~aiZ2.", 00:12:47.029 "method": "nvmf_create_subsystem", 00:12:47.029 "req_id": 1 00:12:47.029 } 00:12:47.029 Got JSON-RPC error response 00:12:47.029 response: 00:12:47.029 { 00:12:47.029 "code": -32602, 00:12:47.029 "message": "Invalid SN $7|E5glS$+xS0N>~aiZ2." 
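
[editor's note] `gen_random_s` draws each character's code point from the chars array (ASCII 32-127), converts it with `printf %x` plus `echo -e '\xNN'`, and appends it; after the loop it checks the first character against '-' (so the result cannot look like an option flag) and echoes the string, here `$7|E5glS$+xS0N>~aiZ2.`, which is then submitted as a serial number and rejected with "Invalid SN". A compact sketch of an equivalent generator (slightly simplified: codes 32..126, and a leading '-' is replaced rather than whatever the in-tree helper does):

# Sketch: length-N string of random printable ASCII, avoiding a leading '-'.
gen_random_s() {
  local length=$1 s= code
  while (( ${#s} < length )); do
    code=$(( RANDOM % 95 + 32 ))                 # printable ASCII 32..126
    s+=$(echo -e "\\x$(printf '%x' "$code")")
  done
  [[ $s == -* ]] && s="x${s:1}"                  # simplified leading-dash guard
  echo "$s"
}
gen_random_s 21
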
00:12:47.029 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:47.029 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:47.029 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:47.029 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:47.029 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:47.029 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:47.029 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:47.029 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.029 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:47.029 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:47.029 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:47.029 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.029 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.029 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:47.029 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:47.029 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:47.029 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.029 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.029 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
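
[editor's note] The section then repeats the exercise with `gen_random_s 41`. Given that the 21-character string exercised the serial number, this longer random string presumably feeds the parallel model-number case (`-d`, expecting "Invalid MN"), in the same request/response/glob-match pattern shown above; the CNode number is not visible in this excerpt, so the sketch below uses a placeholder:

# Sketch (hedged): the parallel model-number case, assuming the same pattern.
# cnode0 is a placeholder NQN suffix, not taken from the log.
mn=$(gen_random_s 41)
out=$(scripts/rpc.py nvmf_create_subsystem -d "$mn" \
      nqn.2016-06.io.spdk:cnode0 2>&1) || true
[[ $out == *"Invalid MN"* ]] || { echo "unexpected: $out"; exit 1; }
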
target/invalid.sh@25 -- # echo -e '\x7b' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.030 14:02:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 
00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.030 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:47.031 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:47.031 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:47.031 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.031 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.031 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:47.031 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:47.031 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:47.031 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.031 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.292 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:47.292 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:47.292 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:47.292 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.292 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.292 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:47.292 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:47.292 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:47.292 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.292 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.292 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:47.292 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:47.292 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:47.292 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.292 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.292 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 
00:12:47.292 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:47.292 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:47.292 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.292 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ k == \- ]] 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'kRd`{&\+>#u`Wg!4+oc'\''bRxPsq4DgMBrcl8HK;^:I' 00:12:47.293 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem -d 'kRd`{&\+>#u`Wg!4+oc'\''bRxPsq4DgMBrcl8HK;^:I' nqn.2016-06.io.spdk:cnode26196
00:12:47.553 [2024-12-05 14:02:53.606449] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26196: invalid model number 'kRd`{&\+>#u`Wg!4+oc'bRxPsq4DgMBrcl8HK;^:I'
00:12:47.553 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:12:47.553 {
00:12:47.553 "nqn": "nqn.2016-06.io.spdk:cnode26196",
00:12:47.553 "model_number": "kRd`{&\\+>#u`Wg!4+oc'\''bRxPsq4DgMBrcl8HK;^:I",
00:12:47.553 "method": "nvmf_create_subsystem",
00:12:47.553 "req_id": 1
00:12:47.553 }
00:12:47.553 Got JSON-RPC error response
00:12:47.553 response:
00:12:47.553 {
00:12:47.553 "code": -32602,
00:12:47.553 "message": "Invalid MN kRd`{&\\+>#u`Wg!4+oc'\''bRxPsq4DgMBrcl8HK;^:I"
00:12:47.553 }'
00:12:47.553 14:02:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request:
00:12:47.553 {
00:12:47.553 "nqn": "nqn.2016-06.io.spdk:cnode26196",
00:12:47.553 "model_number": "kRd`{&\\+>#u`Wg!4+oc'bRxPsq4DgMBrcl8HK;^:I",
00:12:47.553 "method": "nvmf_create_subsystem",
00:12:47.553 "req_id": 1
00:12:47.553 }
00:12:47.553 Got JSON-RPC error response
00:12:47.553 response:
00:12:47.553 {
00:12:47.553 "code": -32602,
00:12:47.553 "message": "Invalid MN kRd`{&\\+>#u`Wg!4+oc'bRxPsq4DgMBrcl8HK;^:I"
00:12:47.553 } == *\I\n\v\a\l\i\d\ \M\N* ]]
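
The character-by-character trace above is target/invalid.sh assembling a random 41-byte model number: each pass of the ll loop picks a code point, renders it with printf %x plus echo -e, and appends it to string; line 58 then feeds the result to rpc.py, and line 59 asserts on the "Invalid MN" text (41 bytes is one more than the 40-byte NVMe model-number field, hence the rejection). A minimal bash sketch of that flow, assuming a gen_random_s helper and an $rpc variable pointing at scripts/rpc.py; both names are illustrative, not the script's exact ones:

    # Sketch: build a random printable-ASCII string, one character per
    # iteration, the way the (( ll++ )) / printf %x / echo -e trace does.
    gen_random_s() {
        local length=$1 ll string=
        for ((ll = 0; ll < length; ll++)); do
            # Pick a printable code point (0x20-0x7e) and append its character.
            string+=$(echo -e "\x$(printf '%x' $((RANDOM % 95 + 32)))")
        done
        echo "$string"
    }

    # Feed an over-long string as the model number (-d) and require the
    # target's specific rejection text, as invalid.sh lines 58-59 do.
    out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -d "$(gen_random_s 41)" 2>&1) || true
    [[ $out == *"Invalid MN"* ]]
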
"4421" 00:12:48.073 }, 00:12:48.073 "method": "nvmf_subsystem_remove_listener", 00:12:48.073 "req_id": 1 00:12:48.073 } 00:12:48.073 Got JSON-RPC error response 00:12:48.073 response: 00:12:48.073 { 00:12:48.073 "code": -32602, 00:12:48.073 "message": "Invalid parameters" 00:12:48.073 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:48.073 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15849 -i 0 00:12:48.073 [2024-12-05 14:02:54.364911] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15849: invalid cntlid range [0-65519] 00:12:48.333 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:48.333 { 00:12:48.333 "nqn": "nqn.2016-06.io.spdk:cnode15849", 00:12:48.333 "min_cntlid": 0, 00:12:48.333 "method": "nvmf_create_subsystem", 00:12:48.333 "req_id": 1 00:12:48.333 } 00:12:48.333 Got JSON-RPC error response 00:12:48.333 response: 00:12:48.333 { 00:12:48.333 "code": -32602, 00:12:48.333 "message": "Invalid cntlid range [0-65519]" 00:12:48.333 }' 00:12:48.333 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:48.333 { 00:12:48.333 "nqn": "nqn.2016-06.io.spdk:cnode15849", 00:12:48.333 "min_cntlid": 0, 00:12:48.333 "method": "nvmf_create_subsystem", 00:12:48.333 "req_id": 1 00:12:48.333 } 00:12:48.333 Got JSON-RPC error response 00:12:48.333 response: 00:12:48.333 { 00:12:48.333 "code": -32602, 00:12:48.333 "message": "Invalid cntlid range [0-65519]" 00:12:48.333 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:48.333 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9359 -i 65520 00:12:48.333 [2024-12-05 14:02:54.553562] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9359: invalid cntlid range [65520-65519] 00:12:48.333 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:48.333 { 00:12:48.333 "nqn": "nqn.2016-06.io.spdk:cnode9359", 00:12:48.333 "min_cntlid": 65520, 00:12:48.333 "method": "nvmf_create_subsystem", 00:12:48.333 "req_id": 1 00:12:48.333 } 00:12:48.333 Got JSON-RPC error response 00:12:48.333 response: 00:12:48.333 { 00:12:48.333 "code": -32602, 00:12:48.333 "message": "Invalid cntlid range [65520-65519]" 00:12:48.333 }' 00:12:48.333 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:48.333 { 00:12:48.333 "nqn": "nqn.2016-06.io.spdk:cnode9359", 00:12:48.333 "min_cntlid": 65520, 00:12:48.333 "method": "nvmf_create_subsystem", 00:12:48.333 "req_id": 1 00:12:48.333 } 00:12:48.333 Got JSON-RPC error response 00:12:48.333 response: 00:12:48.333 { 00:12:48.333 "code": -32602, 00:12:48.333 "message": "Invalid cntlid range [65520-65519]" 00:12:48.333 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:48.333 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28026 -I 0 00:12:48.594 [2024-12-05 14:02:54.742119] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28026: invalid cntlid range [1-0] 00:12:48.594 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@77 -- # out='request: 00:12:48.594 { 00:12:48.594 "nqn": "nqn.2016-06.io.spdk:cnode28026", 00:12:48.594 "max_cntlid": 0, 00:12:48.594 "method": "nvmf_create_subsystem", 00:12:48.594 "req_id": 1 00:12:48.594 } 00:12:48.594 Got JSON-RPC error response 00:12:48.594 response: 00:12:48.594 { 00:12:48.594 "code": -32602, 00:12:48.594 "message": "Invalid cntlid range [1-0]" 00:12:48.594 }' 00:12:48.594 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:48.594 { 00:12:48.594 "nqn": "nqn.2016-06.io.spdk:cnode28026", 00:12:48.594 "max_cntlid": 0, 00:12:48.594 "method": "nvmf_create_subsystem", 00:12:48.594 "req_id": 1 00:12:48.594 } 00:12:48.594 Got JSON-RPC error response 00:12:48.594 response: 00:12:48.594 { 00:12:48.594 "code": -32602, 00:12:48.594 "message": "Invalid cntlid range [1-0]" 00:12:48.594 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:48.594 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24062 -I 65520 00:12:48.855 [2024-12-05 14:02:54.926697] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24062: invalid cntlid range [1-65520] 00:12:48.855 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:48.855 { 00:12:48.855 "nqn": "nqn.2016-06.io.spdk:cnode24062", 00:12:48.855 "max_cntlid": 65520, 00:12:48.855 "method": "nvmf_create_subsystem", 00:12:48.855 "req_id": 1 00:12:48.855 } 00:12:48.855 Got JSON-RPC error response 00:12:48.855 response: 00:12:48.855 { 00:12:48.855 "code": -32602, 00:12:48.855 "message": "Invalid cntlid range [1-65520]" 00:12:48.855 }' 00:12:48.855 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:48.855 { 00:12:48.855 "nqn": "nqn.2016-06.io.spdk:cnode24062", 00:12:48.855 "max_cntlid": 65520, 00:12:48.855 "method": "nvmf_create_subsystem", 00:12:48.855 "req_id": 1 00:12:48.855 } 00:12:48.855 Got JSON-RPC error response 00:12:48.855 response: 00:12:48.855 { 00:12:48.855 "code": -32602, 00:12:48.855 "message": "Invalid cntlid range [1-65520]" 00:12:48.855 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:48.855 14:02:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15808 -i 6 -I 5 00:12:48.855 [2024-12-05 14:02:55.115299] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15808: invalid cntlid range [6-5] 00:12:48.855 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:48.855 { 00:12:48.855 "nqn": "nqn.2016-06.io.spdk:cnode15808", 00:12:48.855 "min_cntlid": 6, 00:12:48.855 "max_cntlid": 5, 00:12:48.855 "method": "nvmf_create_subsystem", 00:12:48.855 "req_id": 1 00:12:48.855 } 00:12:48.855 Got JSON-RPC error response 00:12:48.855 response: 00:12:48.855 { 00:12:48.855 "code": -32602, 00:12:48.855 "message": "Invalid cntlid range [6-5]" 00:12:48.855 }' 00:12:48.855 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:48.855 { 00:12:48.855 "nqn": "nqn.2016-06.io.spdk:cnode15808", 00:12:48.855 "min_cntlid": 6, 00:12:48.855 "max_cntlid": 5, 00:12:48.855 "method": "nvmf_create_subsystem", 00:12:48.855 "req_id": 1 00:12:48.855 } 00:12:48.855 Got JSON-RPC error response 
00:12:48.855 response: 00:12:48.855 { 00:12:48.855 "code": -32602, 00:12:48.855 "message": "Invalid cntlid range [6-5]" 00:12:48.855 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:48.855 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:49.115 { 00:12:49.115 "name": "foobar", 00:12:49.115 "method": "nvmf_delete_target", 00:12:49.115 "req_id": 1 00:12:49.115 } 00:12:49.115 Got JSON-RPC error response 00:12:49.115 response: 00:12:49.115 { 00:12:49.115 "code": -32602, 00:12:49.115 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:49.115 }' 00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:49.115 { 00:12:49.115 "name": "foobar", 00:12:49.115 "method": "nvmf_delete_target", 00:12:49.115 "req_id": 1 00:12:49.115 } 00:12:49.115 Got JSON-RPC error response 00:12:49.115 response: 00:12:49.115 { 00:12:49.115 "code": -32602, 00:12:49.115 "message": "The specified target doesn't exist, cannot delete it." 00:12:49.115 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:49.115 rmmod nvme_tcp 00:12:49.115 rmmod nvme_fabrics 00:12:49.115 rmmod nvme_keyring 00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2650173 ']' 00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2650173 00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2650173 ']' 00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2650173 00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2650173 00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # 
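
All five rejections above probe the same guard in rpc_nvmf_create_subsystem: a subsystem's controller-ID window must satisfy 1 <= min_cntlid <= max_cntlid <= 0xFFEF (65519), and each error echoes the offending [min-max] pair. A condensed bash sketch of those probes, reusing the hypothetical $rpc shorthand from the earlier sketch; the flags are the real ones from the trace (-i sets min_cntlid, -I sets max_cntlid):

    # Each call violates one bound of the valid cntlid window [1, 65519]
    # and should come back as JSON-RPC -32602 "Invalid cntlid range [min-max]".
    for args in "-i 0" "-i 65520" "-I 0" "-I 65520" "-i 6 -I 5"; do
        # $args is deliberately unquoted so it splits into flag and value.
        out=$($rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$RANDOM" $args 2>&1) || true
        [[ $out == *"Invalid cntlid range"* ]]
    done

Note how -I 0 is reported as [1-0]: the minimum defaults to 1, so the message shows the merged window that failed validation rather than just the flag that was passed.
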
00:12:48.855 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request:
00:12:49.115 {
00:12:49.115 "name": "foobar",
00:12:49.115 "method": "nvmf_delete_target",
00:12:49.115 "req_id": 1
00:12:49.115 }
00:12:49.115 Got JSON-RPC error response
00:12:49.115 response:
00:12:49.115 {
00:12:49.115 "code": -32602,
00:12:49.115 "message": "The specified target doesn'\''t exist, cannot delete it."
00:12:49.115 }'
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request:
00:12:49.115 {
00:12:49.115 "name": "foobar",
00:12:49.115 "method": "nvmf_delete_target",
00:12:49.115 "req_id": 1
00:12:49.115 }
00:12:49.115 Got JSON-RPC error response
00:12:49.115 response:
00:12:49.115 {
00:12:49.115 "code": -32602,
00:12:49.115 "message": "The specified target doesn't exist, cannot delete it."
00:12:49.115 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]]
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:49.115 rmmod nvme_tcp
00:12:49.115 rmmod nvme_fabrics
00:12:49.115 rmmod nvme_keyring
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2650173 ']'
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2650173
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2650173 ']'
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2650173
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2650173
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2650173'
killing process with pid 2650173
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2650173
00:12:49.115 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2650173
00:12:49.376 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:12:49.376 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:12:49.376 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:12:49.376 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr
00:12:49.376 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save
00:12:49.376 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:12:49.376 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore
00:12:49.376 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:49.376 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns
00:12:49.376 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:49.376 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:49.376 14:02:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:51.288 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:12:51.288
00:12:51.288 real 0m14.108s
00:12:51.288 user 0m20.930s
00:12:51.288 sys 0m6.787s
00:12:51.288 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:51.288 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:12:51.288 ************************************
00:12:51.288 END TEST nvmf_invalid
00:12:51.288 ************************************
00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:51.549 ************************************
00:12:51.549 START TEST nvmf_connect_stress
00:12:51.549 ************************************
00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:12:51.549 * Looking for test storage...
00:12:51.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:51.549 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:51.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.811 --rc genhtml_branch_coverage=1 00:12:51.811 --rc genhtml_function_coverage=1 00:12:51.811 --rc genhtml_legend=1 00:12:51.811 --rc geninfo_all_blocks=1 00:12:51.811 --rc geninfo_unexecuted_blocks=1 00:12:51.811 00:12:51.811 ' 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:51.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.811 --rc genhtml_branch_coverage=1 00:12:51.811 --rc genhtml_function_coverage=1 00:12:51.811 --rc genhtml_legend=1 00:12:51.811 --rc geninfo_all_blocks=1 00:12:51.811 --rc geninfo_unexecuted_blocks=1 00:12:51.811 00:12:51.811 ' 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:51.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.811 --rc genhtml_branch_coverage=1 00:12:51.811 --rc genhtml_function_coverage=1 00:12:51.811 --rc genhtml_legend=1 00:12:51.811 --rc geninfo_all_blocks=1 00:12:51.811 --rc geninfo_unexecuted_blocks=1 00:12:51.811 00:12:51.811 ' 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:51.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.811 --rc genhtml_branch_coverage=1 00:12:51.811 --rc genhtml_function_coverage=1 00:12:51.811 --rc genhtml_legend=1 00:12:51.811 --rc geninfo_all_blocks=1 00:12:51.811 --rc geninfo_unexecuted_blocks=1 00:12:51.811 00:12:51.811 ' 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:51.811 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:51.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:51.812 14:02:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:59.950 14:03:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:59.950 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:59.950 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:59.950 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:59.951 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:59.951 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}")
00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes
00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:12:59.951 14:03:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:59.951 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:59.951 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:59.951 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:12:59.951 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:59.951 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:59.951 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:59.951 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:59.951 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:59.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:59.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms
00:12:59.951
00:12:59.951 --- 10.0.0.2 ping statistics ---
00:12:59.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:59.951 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms
00:12:59.951 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:59.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:59.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms
00:12:59.951
00:12:59.951 --- 10.0.0.1 ping statistics ---
00:12:59.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:59.951 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms
00:12:59.951 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:59.951 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0
00:12:59.951 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:59.951 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:59.951 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:59.951 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:59.951 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:59.951 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:59.951 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
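
Everything from ip netns add onward builds the loopback topology the two pings just verified: one port of the e810 pair (cvl_0_0) moves into a private namespace to act as the target at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of that setup, using the device and namespace names the trace shows; this is a reduction of nvmf_tcp_init, not its full body:

    # The target side lives in its own namespace so initiator and target can
    # share one host without the kernel short-circuiting the TCP path.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
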
/var/tmp/spdk.sock...' 00:12:59.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.951 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.951 14:03:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.951 [2024-12-05 14:03:05.329692] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:12:59.951 [2024-12-05 14:03:05.329752] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.951 [2024-12-05 14:03:05.430360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:59.951 [2024-12-05 14:03:05.481933] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.951 [2024-12-05 14:03:05.481983] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.951 [2024-12-05 14:03:05.481991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.951 [2024-12-05 14:03:05.481998] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.951 [2024-12-05 14:03:05.482005] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:59.951 [2024-12-05 14:03:05.484121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.951 [2024-12-05 14:03:05.484267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.951 [2024-12-05 14:03:05.484268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.951 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.951 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:59.951 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:59.951 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:59.951 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.951 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.951 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:59.951 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.951 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.951 [2024-12-05 14:03:06.216402] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.951 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.951 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:59.951 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
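
At this point the connect_stress test bed is fully wired: nvmf_tcp_init moved one port of the connected cvl_0_* pair into a private network namespace, addressed both ends, verified reachability with a ping in each direction, and nvmf_tgt was then started inside that namespace with core mask 0xE (binary 1110, so reactors land on cores 1, 2 and 3, which matches the three "Reactor started" notices). A condensed sketch of that bring-up, using only commands visible in the log above:

    ip netns add cvl_0_0_ns_spdk                       # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                 # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root ns
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE   # from the SPDK build tree

Running the target in a namespace while the initiator stays in the root namespace forces the NVMe/TCP traffic onto the physical cvl_0_* link rather than loopback.
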
00:12:59.951 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.951 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.952 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.952 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.952 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:59.952 [2024-12-05 14:03:06.242003] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.952 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.952 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:00.212 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.212 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:00.212 NULL1 00:13:00.212 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.212 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2655518 00:13:00.212 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:00.212 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:00.212 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:00.212 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:00.212 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:00.212 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:00.212 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:00.212 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:00.212 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:00.212 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:00.212 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:00.212 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:00.212 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:00.212 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:00.212 14:03:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2655518 00:13:00.213 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.213 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.213 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:00.472 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.472 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2655518 00:13:00.472 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.472 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.472 14:03:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:00.733 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.993 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2655518 00:13:00.993 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:00.993 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.993 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.254 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.254 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2655518 00:13:01.254 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.254 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.254 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.515 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.515 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2655518 00:13:01.515 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.515 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.515 14:03:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.775 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.775 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2655518 00:13:01.775 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:01.775 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.775 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.036 14:03:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.036 14:03:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2655518 00:13:10.031 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.031 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.031 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:10.292 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:10.292 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.292 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2655518 00:13:10.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2655518) - No such process 00:13:10.292 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2655518 00:13:10.292 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:10.292 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:10.292 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:10.292 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:10.292 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:10.292 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:10.292 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:10.292 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:10.292 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:10.292 rmmod nvme_tcp 00:13:10.292 rmmod nvme_fabrics 00:13:10.292 rmmod nvme_keyring 00:13:10.292 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:10.292 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:10.292 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:10.292 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2655463 ']' 00:13:10.292 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2655463 00:13:10.292 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2655463 ']' 00:13:10.292 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2655463 00:13:10.292 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:10.292 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:10.292 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2655463 00:13:10.553 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
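
The run of near-identical records above is connect_stress.sh polling the stress binary: kill -0 delivers no signal and merely tests that the process still exists, and each successful probe is followed by an rpc_cmd batch (plausibly replaying the rpc.txt file assembled by the seq 1 20 loop earlier) until kill finally reports "No such process". A minimal sketch of that pattern, with the sleep interval and the rpc replay written as assumptions rather than the script's literal code:

    PERF_PID=2655518                              # stress tool launched earlier
    while kill -0 "$PERF_PID" 2>/dev/null; do     # probe fails once the pid is gone
        # keep management traffic flowing while connections churn,
        # e.g. rpc_cmd < rpc.txt
        sleep 1
    done
    wait "$PERF_PID" 2>/dev/null                  # reap the child, pick up its exit code
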
00:13:10.553 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:10.553 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2655463' 00:13:10.553 killing process with pid 2655463 00:13:10.553 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2655463 00:13:10.553 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2655463 00:13:10.553 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:10.553 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:10.553 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:10.553 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:10.553 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:10.553 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:10.553 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:10.553 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:10.553 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:10.553 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.553 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:10.553 14:03:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.099 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:13.099 00:13:13.099 real 0m21.152s 00:13:13.099 user 0m42.295s 00:13:13.099 sys 0m9.199s 00:13:13.099 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:13.099 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.099 ************************************ 00:13:13.099 END TEST nvmf_connect_stress 00:13:13.099 ************************************ 00:13:13.099 14:03:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:13.099 14:03:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:13.099 14:03:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:13.099 14:03:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:13.099 ************************************ 00:13:13.099 START TEST nvmf_fused_ordering 00:13:13.099 ************************************ 00:13:13.099 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:13.099 * Looking for test storage... 
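
One detail worth keeping from the teardown that just ran: the harness never records which firewall rules it added. The ipts helper tags every rule it inserts with an SPDK_NVMF comment, and the iptr helper later removes all of them at once by filtering the saved ruleset, exactly as the log shows:

    # setup: open the NVMe/TCP listener port, tagging the rule for later removal
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # teardown: rewrite the ruleset without any SPDK_NVMF-tagged rules
    iptables-save | grep -v SPDK_NVMF | iptables-restore

This keeps cleanup stateless: however many tests ran or crashed in between, one grep-and-restore pass sweeps every rule the suite created.
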
00:13:13.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.099 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:13.099 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:13:13.099 14:03:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:13.099 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:13.099 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:13.099 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:13.099 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:13.099 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:13.099 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:13.099 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:13.099 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:13.099 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:13.099 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:13.099 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:13.099 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:13.099 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:13.099 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:13.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.100 --rc genhtml_branch_coverage=1 00:13:13.100 --rc genhtml_function_coverage=1 00:13:13.100 --rc genhtml_legend=1 00:13:13.100 --rc geninfo_all_blocks=1 00:13:13.100 --rc geninfo_unexecuted_blocks=1 00:13:13.100 00:13:13.100 ' 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:13.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.100 --rc genhtml_branch_coverage=1 00:13:13.100 --rc genhtml_function_coverage=1 00:13:13.100 --rc genhtml_legend=1 00:13:13.100 --rc geninfo_all_blocks=1 00:13:13.100 --rc geninfo_unexecuted_blocks=1 00:13:13.100 00:13:13.100 ' 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:13.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.100 --rc genhtml_branch_coverage=1 00:13:13.100 --rc genhtml_function_coverage=1 00:13:13.100 --rc genhtml_legend=1 00:13:13.100 --rc geninfo_all_blocks=1 00:13:13.100 --rc geninfo_unexecuted_blocks=1 00:13:13.100 00:13:13.100 ' 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:13.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.100 --rc genhtml_branch_coverage=1 00:13:13.100 --rc genhtml_function_coverage=1 00:13:13.100 --rc genhtml_legend=1 00:13:13.100 --rc geninfo_all_blocks=1 00:13:13.100 --rc geninfo_unexecuted_blocks=1 00:13:13.100 00:13:13.100 ' 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.100 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:13.101 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.101 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:13.101 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:13.101 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:13.101 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.101 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.101 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.101 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:13.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:13.101 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:13.101 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:13.101 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:13.101 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:13.101 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:13.101 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.101 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:13.101 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:13.101 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:13.101 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.101 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.101 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.101 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:13.101 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:13.101 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:13.101 14:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:21.245 14:03:26 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:21.245 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:21.245 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:21.245 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:21.245 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
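nvmf_tcp_init, traced above, splits the two ports of the NIC into a self-contained test topology: the target port moves into a private network namespace while the initiator port stays in the root namespace, so 10.0.0.1 <-> 10.0.0.2 traffic actually crosses the link instead of being short-circuited through local routing. A condensed sketch of the equivalent commands (interface and address values taken from this run):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

The iptables rule and the two pings that follow verify that port 4420 is reachable over this path before the target starts.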
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:21.245 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:21.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:21.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.703 ms
00:13:21.245 
00:13:21.245 --- 10.0.0.2 ping statistics ---
00:13:21.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:21.245 rtt min/avg/max/mdev = 0.703/0.703/0.703/0.000 ms
00:13:21.246 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:21.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:21.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms
00:13:21.246 
00:13:21.246 --- 10.0.0.1 ping statistics ---
00:13:21.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:21.246 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms
00:13:21.246 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:21.246 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0
00:13:21.246 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:21.246 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:21.246 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:13:21.246 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:13:21.246 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:21.246 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:13:21.246 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:21.246 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:13:21.246 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:21.246 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:21.246 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:21.246 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2662313
00:13:21.246 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2662313
00:13:21.246 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2662313 ']'
00:13:21.246 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:13:21.246 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:21.246 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:21.246 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:21.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:21.246 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:21.246 14:03:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:21.246 [2024-12-05 14:03:26.631668] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization...
00:13:21.246 [2024-12-05 14:03:26.631737] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:21.246 [2024-12-05 14:03:26.732038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:21.246 [2024-12-05 14:03:26.782091] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:21.246 [2024-12-05 14:03:26.782145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:21.246 [2024-12-05 14:03:26.782154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:21.246 [2024-12-05 14:03:26.782162] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:21.246 [2024-12-05 14:03:26.782168] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:21.246 [2024-12-05 14:03:26.782953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:21.246 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:21.246 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0
00:13:21.246 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:13:21.246 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:21.246 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:21.246 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:21.246 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:13:21.246 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:21.246 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:21.246 [2024-12-05 14:03:27.509503] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:21.246 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:21.246 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:13:21.246 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:21.246 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:21.246 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:21.246 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:21.246 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:21.246 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:21.246 [2024-12-05 14:03:27.533804] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:21.246 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:21.246 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:13:21.246 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:21.246 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:21.507 NULL1
00:13:21.507 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:21.507 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:13:21.507 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:21.507 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:21.507 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:21.507 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:13:21.507 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:21.507 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:21.507 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:21.507 14:03:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:13:21.507 [2024-12-05 14:03:27.603790] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization...
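The trace above starts nvmf_tgt inside the target namespace and provisions it over the /var/tmp/spdk.sock RPC socket; rpc_cmd in the harness forwards to scripts/rpc.py. The same sequence issued directly would look roughly like this (a sketch only; the harness additionally waits for the RPC socket via waitforlisten before issuing commands):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    "$rpc" nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8192-byte IO unit
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512                # null bdev: 1000 MiB, 512-byte blocks
    "$rpc" bdev_wait_for_examine
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

With the namespace attached, the fused_ordering binary connects as an initiator to 10.0.0.2:4420 and drives the queue, which produces the numbered output below.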
00:13:21.507 [2024-12-05 14:03:27.603836] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2662484 ]
00:13:21.769 Attached to nqn.2016-06.io.spdk:cnode1
00:13:21.769 Namespace ID: 1 size: 1GB
00:13:21.769 fused_ordering(0)
[... fused_ordering(1) through fused_ordering(1022) continue in sequence, 00:13:21.769 - 00:13:23.754 ...]
00:13:23.754 fused_ordering(1023)
00:13:23.754 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:13:23.754 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:13:23.754 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:13:23.754 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:13:23.754 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:23.754 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:13:23.754 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:23.754 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:23.754 rmmod nvme_tcp
00:13:23.754 rmmod nvme_fabrics
00:13:23.754 rmmod nvme_keyring
00:13:23.754 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:23.754 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:13:23.754 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
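nvmfcleanup, traced above, unloads the initiator-side kernel modules with a retry loop, since nvme-tcp can stay in use for a moment while connections drain. A minimal sketch of that pattern (the trace shows only the loop head and the modprobe calls; the break-on-success and the sleep are assumptions here):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # assumption: retry until the module unloads
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e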
00:13:23.754 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2662313 ']'
00:13:23.754 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2662313
00:13:23.754 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2662313 ']'
00:13:23.754 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2662313
00:13:23.754 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname
00:13:23.754 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:23.754 14:03:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2662313
00:13:23.754 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:13:23.754 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:13:24.015 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2662313'
00:13:24.015 killing process with pid 2662313
00:13:24.015 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2662313
00:13:24.015 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2662313
00:13:24.015 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:13:24.015 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:13:24.015 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:13:24.015 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:13:24.015 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save
00:13:24.015 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:13:24.015 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore
00:13:24.015 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:24.015 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:24.015 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:24.015 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:24.015 14:03:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:26.561 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:26.561 
00:13:26.561 real 0m13.346s
00:13:26.561 user 0m7.010s
00:13:26.561 sys 0m7.131s
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:26.562 ************************************
00:13:26.562 END TEST nvmf_fused_ordering
00:13:26.562 ************************************
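Firewall cleanup relies on the tagging done at setup time: every rule the test inserted carries an SPDK_NVMF comment, so iptr (traced above) can dump the whole ruleset, drop the tagged lines, and restore the remainder without touching unrelated rules. The same round-trip as a standalone pipeline, together with the namespace teardown that remove_spdk_ns performs (its body is not shown in the trace, so the netns delete below is an assumption):

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove only SPDK_NVMF-tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # assumption: equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1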
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:26.562 ************************************
00:13:26.562 START TEST nvmf_ns_masking
00:13:26.562 ************************************
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:13:26.562 * Looking for test storage...
00:13:26.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:13:26.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:26.562 --rc genhtml_branch_coverage=1
00:13:26.562 --rc genhtml_function_coverage=1
00:13:26.562 --rc genhtml_legend=1
00:13:26.562 --rc geninfo_all_blocks=1
00:13:26.562 --rc geninfo_unexecuted_blocks=1
00:13:26.562 
00:13:26.562 '
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 
00:13:26.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:26.562 --rc genhtml_branch_coverage=1
00:13:26.562 --rc genhtml_function_coverage=1
00:13:26.562 --rc genhtml_legend=1
00:13:26.562 --rc geninfo_all_blocks=1
00:13:26.562 --rc geninfo_unexecuted_blocks=1
00:13:26.562 
00:13:26.562 '
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:13:26.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:26.562 --rc genhtml_branch_coverage=1
00:13:26.562 --rc genhtml_function_coverage=1
00:13:26.562 --rc genhtml_legend=1
00:13:26.562 --rc geninfo_all_blocks=1
00:13:26.562 --rc geninfo_unexecuted_blocks=1
00:13:26.562 
00:13:26.562 '
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:13:26.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:26.562 --rc genhtml_branch_coverage=1
00:13:26.562 --rc genhtml_function_coverage=1
00:13:26.562 --rc genhtml_legend=1
00:13:26.562 --rc geninfo_all_blocks=1
00:13:26.562 --rc geninfo_unexecuted_blocks=1
00:13:26.562 
00:13:26.562 '
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.562 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:26.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=2488ce66-bbe6-439f-be4b-f356f8cd9624 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=de905952-bf4e-4403-a518-d141517f0d40 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=59f89b36-8136-4709-92e8-0159c4210ef7 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:26.563 14:03:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:34.702 14:03:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:34.702 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:34.702 14:03:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:34.702 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:34.702 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
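The discovery loop traced here is gather_supported_nvmf_pci_devs from test/nvmf/common.sh resolving each detected E810 function (device ID 0x159b) to its kernel interface name through sysfs. A minimal sketch of that resolution, assuming the same two PCI addresses as this run:

    # map a PCI function to its net device(s) the way the discovery loop above does
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev, e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done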
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:13:34.702 Found net devices under 0000:4b:00.1: cvl_0_1
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
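nvmf_tcp_init splits the two ports into separate network stacks so that initiator and target on the same machine talk over the wire instead of short-circuiting through one stack: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Condensed from the trace, the plumbing is:

    ip netns add cvl_0_0_ns_spdk                   # private stack for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # move the first E810 port inside it
    ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up

The pings that follow in the trace verify the path in both directions before the target is started.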
00:13:34.702 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:34.703 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:34.703 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:34.703 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:34.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:34.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms
00:13:34.703
00:13:34.703 --- 10.0.0.2 ping statistics ---
00:13:34.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:34.703 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms
00:13:34.703 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:34.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:34.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms
00:13:34.703
00:13:34.703 --- 10.0.0.1 ping statistics ---
00:13:34.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:34.703 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms
00:13:34.703 14:03:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0
00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2667125
00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2667125
00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking --
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:34.703 [2024-12-05 14:03:40.107712] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:13:34.703 [2024-12-05 14:03:40.107780] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.703 [2024-12-05 14:03:40.209698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.703 [2024-12-05 14:03:40.261784] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.703 [2024-12-05 14:03:40.261840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:34.703 [2024-12-05 14:03:40.261849] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:34.703 [2024-12-05 14:03:40.261856] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:34.703 [2024-12-05 14:03:40.261862] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
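waitforlisten, traced above, gates the rest of the test on the freshly started nvmf_tgt actually serving JSON-RPC on /var/tmp/spdk.sock. A minimal stand-in for it, a sketch rather than the real helper, polling with rpc_get_methods (a standard SPDK RPC that any live target answers; unix sockets are reachable across network namespaces, so no netns exec is needed):

    # hypothetical simplified waitforlisten: poll the RPC socket until the target responds
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done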
00:13:34.703 [2024-12-05 14:03:40.262634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.703 14:03:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:34.963 [2024-12-05 14:03:41.138928] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:34.963 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:34.963 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:34.963 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:35.223 Malloc1 00:13:35.223 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:35.485 Malloc2 00:13:35.485 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:35.746 14:03:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:35.746 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.007 [2024-12-05 14:03:42.169347] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.007 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:36.007 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 59f89b36-8136-4709-92e8-0159c4210ef7 -a 10.0.0.2 -s 4420 -i 4 00:13:36.267 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:36.267 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:36.267 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:36.267 14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:36.268 
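Stripped of the xtrace noise, the bring-up traced in this block is six RPCs and one host-side connect. The -s serial SPDKISFASTANDAWESOME is what waitforserial greps for in lsblk output, -I passes the host UUID generated at the top of the script as the NVMe host identifier, and -i 4 requests four I/O queues. Condensed from the trace:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc1        # 64 MiB ramdisk, 512-byte blocks
    $rpc_py bdev_malloc_create 64 512 -b Malloc2
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 59f89b36-8136-4709-92e8-0159c4210ef7 -a 10.0.0.2 -s 4420 -i 4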
14:03:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:13:38.178 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:13:38.178 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:13:38.178 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:13:38.178 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:13:38.178 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:13:38.178 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:13:38.178 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:13:38.178 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:13:38.178 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:13:38.178 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:13:38.178 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1
00:13:38.178 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:38.178 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:38.178 [ 0]:0x1
00:13:38.178 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:38.178 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:38.437 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8599b4a320fe4592bb43ec135109c8f7
00:13:38.437 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8599b4a320fe4592bb43ec135109c8f7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:38.437 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
00:13:38.437 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1
00:13:38.437 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:38.437 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:38.437 [ 0]:0x1
00:13:38.437 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:38.437 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:38.437 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8599b4a320fe4592bb43ec135109c8f7
00:13:38.438 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8599b4a320fe4592bb43ec135109c8f7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
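ns_is_visible (target/ns_masking.sh@43-45), exercised repeatedly above and below, judges visibility purely from the host side: the nsid must show up in nvme list-ns output, and the NGUID returned by nvme id-ns must be non-zero, because a masked namespace reads back with an all-zero NGUID. A sketch of the helper as it behaves in this trace, assuming the controller resolved above as /dev/nvme0:

    ns_is_visible() {    # $1 = nsid, e.g. 0x1
        nvme list-ns /dev/nvme0 | grep "$1"                       # prints e.g. "[ 0]:0x1" when exposed
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != 00000000000000000000000000000000 ]]          # all-zero NGUID means masked
    }

The masking knobs the test turns next are nvmf_subsystem_add_ns ... --no-auto-visible, which attaches a namespace hidden from every host, plus nvmf_ns_add_host and nvmf_ns_remove_host, which expose or hide one nsid for one host NQN; the NOT wrapper around ns_is_visible then asserts that a masked namespace has really vanished from the host's view.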
00:13:38.438 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2
00:13:38.698 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:13:38.698 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:38.698 [ 1]:0x2
00:13:38.698 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:38.698 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:38.698 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7df55322d5314ae88b0b9d830c1eee6a
00:13:38.698 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7df55322d5314ae88b0b9d830c1eee6a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:38.698 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect
00:13:38.698 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:38.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:38.698 14:03:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:38.957 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:13:38.957 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1
00:13:38.957 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 59f89b36-8136-4709-92e8-0159c4210ef7 -a 10.0.0.2 -s 4420 -i 4
00:13:39.216 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1
00:13:39.216 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:13:39.216 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:13:39.216 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]]
00:13:39.216 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1
00:13:39.216 14:03:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:13:41.122 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- #
return 0 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:41.382 [ 0]:0x2 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=7df55322d5314ae88b0b9d830c1eee6a 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7df55322d5314ae88b0b9d830c1eee6a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.382 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:41.642 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:41.642 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:41.642 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:41.642 [ 0]:0x1 00:13:41.642 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:41.642 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:41.642 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8599b4a320fe4592bb43ec135109c8f7 00:13:41.642 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8599b4a320fe4592bb43ec135109c8f7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.642 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:41.643 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:41.643 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:41.643 [ 1]:0x2 00:13:41.643 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:41.643 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:41.643 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7df55322d5314ae88b0b9d830c1eee6a 00:13:41.643 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7df55322d5314ae88b0b9d830c1eee6a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.643 14:03:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:41.903 14:03:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:41.903 [ 0]:0x2 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7df55322d5314ae88b0b9d830c1eee6a 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7df55322d5314ae88b0b9d830c1eee6a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:41.903 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:42.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.163 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:42.163 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:42.163 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 59f89b36-8136-4709-92e8-0159c4210ef7 -a 10.0.0.2 -s 4420 -i 4 00:13:42.424 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:42.424 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:42.424 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.424 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:42.424 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:42.424 14:03:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:44.336 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:44.336 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:44.336 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:44.336 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:44.336 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.336 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:44.336 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:44.336 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:44.336 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:44.336 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:44.336 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:44.336 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:44.336 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:44.336 [ 0]:0x1 00:13:44.336 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:44.336 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:44.598 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=8599b4a320fe4592bb43ec135109c8f7 00:13:44.598 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 8599b4a320fe4592bb43ec135109c8f7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:44.598 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:44.598 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:44.598 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:44.598 [ 1]:0x2 00:13:44.598 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:44.598 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:44.598 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7df55322d5314ae88b0b9d830c1eee6a 00:13:44.598 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7df55322d5314ae88b0b9d830c1eee6a != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:44.598 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:44.598 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:44.598 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:44.598 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:44.598 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:44.598 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:44.598 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:44.598 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:44.598 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:44.598 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:44.598 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:44.859 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:44.859 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:44.859 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:44.859 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:44.859 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:44.859 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:44.859 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:44.859 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:44.859 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:44.859 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:44.859 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:44.859 [ 0]:0x2 00:13:44.859 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:44.859 14:03:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:44.859 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7df55322d5314ae88b0b9d830c1eee6a 00:13:44.859 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7df55322d5314ae88b0b9d830c1eee6a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:44.859 14:03:51 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:13:44.859 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:13:44.859 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:13:44.859 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:44.859 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:44.859 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:44.859 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:44.859 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:44.859 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:44.859 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:44.859 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:13:44.859 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:13:45.120 [2024-12-05 14:03:51.170010] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:13:45.120 request:
00:13:45.120 {
00:13:45.120 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:13:45.120 "nsid": 2,
00:13:45.120 "host": "nqn.2016-06.io.spdk:host1",
00:13:45.120 "method": "nvmf_ns_remove_host",
00:13:45.120 "req_id": 1
00:13:45.120 }
00:13:45.120 Got JSON-RPC error response
00:13:45.120 response:
00:13:45.120 {
00:13:45.120 "code": -32602,
00:13:45.120 "message": "Invalid parameters"
00:13:45.120 }
00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
14:03:51
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:45.120 [ 0]:0x2 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7df55322d5314ae88b0b9d830c1eee6a 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7df55322d5314ae88b0b9d830c1eee6a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:45.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2669513 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2669513 /var/tmp/host.sock 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2669513 ']' 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:45.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:45.120 14:03:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:45.381 [2024-12-05 14:03:51.421569] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:13:45.381 [2024-12-05 14:03:51.421622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669513 ] 00:13:45.381 [2024-12-05 14:03:51.508760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.381 [2024-12-05 14:03:51.544647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.951 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.951 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:45.951 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.211 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:46.472 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 2488ce66-bbe6-439f-be4b-f356f8cd9624 00:13:46.472 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:46.472 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2488CE66BBE6439FBE4BF356F8CD9624 -i 00:13:46.472 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid de905952-bf4e-4403-a518-d141517f0d40 00:13:46.472 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:46.472 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g DE905952BF4E4403A518D141517F0D40 -i 00:13:46.732 14:03:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:46.992 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:47.251 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:47.251 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:47.511 nvme0n1 00:13:47.511 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:47.511 14:03:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:48.085 nvme1n2 00:13:48.085 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:48.085 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:48.085 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:48.085 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:48.085 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:48.085 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:48.085 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:48.085 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:48.085 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:48.345 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 2488ce66-bbe6-439f-be4b-f356f8cd9624 == \2\4\8\8\c\e\6\6\-\b\b\e\6\-\4\3\9\f\-\b\e\4\b\-\f\3\5\6\f\8\c\d\9\6\2\4 ]] 00:13:48.345 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:48.345 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:48.345 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:48.605 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
de905952-bf4e-4403-a518-d141517f0d40 == \d\e\9\0\5\9\5\2\-\b\f\4\e\-\4\4\0\3\-\a\5\1\8\-\d\1\4\1\5\1\7\f\0\d\4\0 ]] 00:13:48.605 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.605 14:03:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:48.866 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 2488ce66-bbe6-439f-be4b-f356f8cd9624 00:13:48.866 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:48.866 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2488CE66BBE6439FBE4BF356F8CD9624 00:13:48.866 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:48.866 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2488CE66BBE6439FBE4BF356F8CD9624 00:13:48.866 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:48.866 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:48.866 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:48.866 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:48.866 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:48.866 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:48.866 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:48.866 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:48.866 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 2488CE66BBE6439FBE4BF356F8CD9624 00:13:49.126 [2024-12-05 14:03:55.192501] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:49.126 [2024-12-05 14:03:55.192529] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:49.126 [2024-12-05 14:03:55.192536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:49.126 request: 00:13:49.126 { 00:13:49.126 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:49.126 "namespace": { 00:13:49.126 "bdev_name": 
"invalid", 00:13:49.126 "nsid": 1, 00:13:49.126 "nguid": "2488CE66BBE6439FBE4BF356F8CD9624", 00:13:49.126 "no_auto_visible": false, 00:13:49.126 "hide_metadata": false 00:13:49.126 }, 00:13:49.126 "method": "nvmf_subsystem_add_ns", 00:13:49.126 "req_id": 1 00:13:49.126 } 00:13:49.126 Got JSON-RPC error response 00:13:49.126 response: 00:13:49.126 { 00:13:49.126 "code": -32602, 00:13:49.126 "message": "Invalid parameters" 00:13:49.126 } 00:13:49.126 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:49.126 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:49.126 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:49.126 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:49.126 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 2488ce66-bbe6-439f-be4b-f356f8cd9624 00:13:49.126 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:49.126 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2488CE66BBE6439FBE4BF356F8CD9624 -i 00:13:49.126 14:03:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:51.671 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:51.671 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:51.671 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:51.671 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:51.671 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2669513 00:13:51.671 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2669513 ']' 00:13:51.671 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2669513 00:13:51.671 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:51.671 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.671 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2669513 00:13:51.671 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:51.671 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:51.671 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2669513' 00:13:51.671 killing process with pid 2669513 00:13:51.671 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2669513 00:13:51.671 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2669513 00:13:51.671 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:51.932 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:51.932 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:51.932 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:51.932 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:51.932 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:51.932 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:51.932 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:51.932 14:03:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:51.932 rmmod nvme_tcp 00:13:51.932 rmmod nvme_fabrics 00:13:51.932 rmmod nvme_keyring 00:13:51.932 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:51.932 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:51.932 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:51.932 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2667125 ']' 00:13:51.932 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2667125 00:13:51.932 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2667125 ']' 00:13:51.932 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2667125 00:13:51.932 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:51.932 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.932 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2667125 00:13:51.932 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:51.932 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:51.932 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2667125' 00:13:51.932 killing process with pid 2667125 00:13:51.932 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2667125 00:13:51.932 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2667125 00:13:52.192 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:52.192 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:52.192 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:52.192 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:52.192 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:52.192 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:13:52.192 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:52.192 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:52.192 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:52.192 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.192 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.192 14:03:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.107 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:54.107 00:13:54.107 real 0m27.997s 00:13:54.107 user 0m32.160s 00:13:54.107 sys 0m8.114s 00:13:54.107 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:54.107 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:54.107 ************************************ 00:13:54.107 END TEST nvmf_ns_masking 00:13:54.107 ************************************ 00:13:54.107 14:04:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:54.107 14:04:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:54.107 14:04:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:54.107 14:04:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.107 14:04:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:54.107 ************************************ 00:13:54.107 START TEST nvmf_nvme_cli 00:13:54.107 ************************************ 00:13:54.107 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:54.369 * Looking for test storage... 
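For reference, the ns_is_visible helper that the masking test above keeps invoking (target/ns_masking.sh@43-45 in the trace) reduces to roughly the following bash. This is a sketch reconstructed from the xtrace output, not a verbatim copy of the script; /dev/nvme0 is hard-coded here exactly as it appears in the trace:

    ns_is_visible() {
        # Show whether the NSID appears in the controller's active namespace
        # list (the "[ 0]:0x2" style lines logged above).
        nvme list-ns /dev/nvme0 | grep "$1"
        # The actual check: a masked namespace identifies with an all-zero
        # NGUID, while a visible one reports its real NGUID.
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

Called as `ns_is_visible 0x1`; after nvmf_ns_remove_host the test wraps it in NOT to assert the namespace is no longer exposed, which is the es=1 path seen in the trace above.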
00:13:54.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:54.369 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:54.369 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:13:54.369 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:54.369 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:54.369 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:54.369 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:54.369 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:54.369 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:54.369 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:54.369 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:54.369 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:54.369 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:54.369 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:54.369 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:54.369 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:54.369 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:54.369 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:54.369 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:54.369 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:54.369 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:54.369 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:54.369 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:54.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.370 --rc genhtml_branch_coverage=1 00:13:54.370 --rc genhtml_function_coverage=1 00:13:54.370 --rc genhtml_legend=1 00:13:54.370 --rc geninfo_all_blocks=1 00:13:54.370 --rc geninfo_unexecuted_blocks=1 00:13:54.370 00:13:54.370 ' 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:54.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.370 --rc genhtml_branch_coverage=1 00:13:54.370 --rc genhtml_function_coverage=1 00:13:54.370 --rc genhtml_legend=1 00:13:54.370 --rc geninfo_all_blocks=1 00:13:54.370 --rc geninfo_unexecuted_blocks=1 00:13:54.370 00:13:54.370 ' 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:54.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.370 --rc genhtml_branch_coverage=1 00:13:54.370 --rc genhtml_function_coverage=1 00:13:54.370 --rc genhtml_legend=1 00:13:54.370 --rc geninfo_all_blocks=1 00:13:54.370 --rc geninfo_unexecuted_blocks=1 00:13:54.370 00:13:54.370 ' 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:54.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.370 --rc genhtml_branch_coverage=1 00:13:54.370 --rc genhtml_function_coverage=1 00:13:54.370 --rc genhtml_legend=1 00:13:54.370 --rc geninfo_all_blocks=1 00:13:54.370 --rc geninfo_unexecuted_blocks=1 00:13:54.370 00:13:54.370 ' 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
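The lcov gate traced just above (autotest_common.sh@1711 calling `lt 1.15 2` into scripts/common.sh's cmp_versions) is a field-by-field version comparison. A condensed sketch, assuming the same '.-:' field separators the trace shows; the real helper also dispatches >, <= and >= through a case on the operator:

    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # Missing fields compare as 0, so "2" behaves like "2.0".
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
        done
        return 1   # equal versions are not strictly less-than
    }

Here `lt 1.15 2` succeeds on the first field (1 < 2), matching the "scripts/common.sh@368 -- # return 0" seen in the trace.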
00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:54.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:54.370 14:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:54.370 14:04:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:02.515 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.515 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:02.515 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:02.515 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:02.515 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:02.515 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:02.515 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:02.515 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:02.516 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:02.516 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.516 
14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:02.516 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:02.516 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:02.516 14:04:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.516 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.516 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.516 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:02.516 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:02.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:14:02.516 00:14:02.516 --- 10.0.0.2 ping statistics --- 00:14:02.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.516 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:14:02.516 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:02.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:14:02.516 00:14:02.516 --- 10.0.0.1 ping statistics --- 00:14:02.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.516 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:14:02.516 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.516 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:02.516 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:02.516 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.516 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:02.516 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:02.516 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.516 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:02.517 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:02.517 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:02.517 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:02.517 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:02.517 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:02.517 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2675014 00:14:02.517 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2675014 00:14:02.517 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:02.517 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2675014 ']' 00:14:02.517 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.517 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.517 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.517 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.517 14:04:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:02.517 [2024-12-05 14:04:08.174660] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
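Both pings just logged succeed because nvmf_tcp_init split the paired e810 ports across network namespaces: cvl_0_0 (the target side) moved into cvl_0_0_ns_spdk while cvl_0_1 (the initiator side) stayed in the root namespace. Condensed from the trace above, with the iptables comment shortened for readability:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target NIC into its own netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF                      # tagged so teardown can strip it
    ping -c 1 10.0.0.2                                      # root netns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target netns -> initiator

The SPDK_NVMF comment tag is what the earlier teardown's `iptables-save | grep -v SPDK_NVMF | iptables-restore` keys on when removing test rules.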
00:14:02.517 [2024-12-05 14:04:08.174725] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.517 [2024-12-05 14:04:08.273465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.517 [2024-12-05 14:04:08.328800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.517 [2024-12-05 14:04:08.328854] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.517 [2024-12-05 14:04:08.328863] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.517 [2024-12-05 14:04:08.328870] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.517 [2024-12-05 14:04:08.328877] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.517 [2024-12-05 14:04:08.331036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.517 [2024-12-05 14:04:08.331076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.517 [2024-12-05 14:04:08.331203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.517 [2024-12-05 14:04:08.331203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.778 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.778 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:02.778 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:02.778 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:02.778 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:02.778 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.778 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:02.778 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.779 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:02.779 [2024-12-05 14:04:09.061074] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.779 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.779 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:02.779 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.779 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.039 Malloc0 00:14:03.039 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.039 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:03.039 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
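Stripped of the rpc_cmd/xtrace plumbing, the target bring-up that nvme_cli.sh@19-28 performs around this point is the following scripts/rpc.py sequence (all arguments as traced; rpc_cmd here resolves to scripts/rpc.py run against the namespaced target):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0     # 64 MB bdev, 512 B blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE above)
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The discovery listener is what produces the two-record discovery log (the current discovery subsystem plus cnode1) that `nvme discover` prints further below.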
00:14:03.039 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.039 Malloc1 00:14:03.039 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.039 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:03.039 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.040 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.040 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.040 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:03.040 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.040 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.040 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.040 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:03.040 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.040 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.040 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.040 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:03.040 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.040 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.040 [2024-12-05 14:04:09.174489] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.040 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.040 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:03.040 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.040 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.040 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.040 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:14:03.300 00:14:03.300 Discovery Log Number of Records 2, Generation counter 2 00:14:03.300 =====Discovery Log Entry 0====== 00:14:03.300 trtype: tcp 00:14:03.300 adrfam: ipv4 00:14:03.300 subtype: current discovery subsystem 00:14:03.300 treq: not required 00:14:03.300 portid: 0 00:14:03.300 trsvcid: 4420 00:14:03.300 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:03.300 traddr: 10.0.0.2 00:14:03.300 eflags: explicit discovery connections, duplicate discovery information 00:14:03.300 sectype: none 00:14:03.300 =====Discovery Log Entry 1====== 00:14:03.300 trtype: tcp 00:14:03.300 adrfam: ipv4 00:14:03.300 subtype: nvme subsystem 00:14:03.300 treq: not required 00:14:03.300 portid: 0 00:14:03.300 trsvcid: 4420 00:14:03.300 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:03.300 traddr: 10.0.0.2 00:14:03.300 eflags: none 00:14:03.300 sectype: none 00:14:03.300 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:03.300 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:03.300 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:03.300 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:03.300 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:03.300 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:03.300 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:03.300 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:03.300 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:03.300 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:03.301 14:04:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:04.685 14:04:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:04.685 14:04:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:04.685 14:04:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:04.685 14:04:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:04.685 14:04:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:04.685 14:04:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:07.241 14:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:07.241 /dev/nvme0n2 ]] 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:07.241 14:04:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:07.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.241 14:04:13 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:07.241 rmmod nvme_tcp 00:14:07.241 rmmod nvme_fabrics 00:14:07.241 rmmod nvme_keyring 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2675014 ']' 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2675014 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2675014 ']' 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2675014 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2675014 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2675014' 00:14:07.241 killing process with pid 2675014 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2675014 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2675014 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.241 14:04:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.300 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:09.300 00:14:09.300 real 0m15.052s 00:14:09.300 user 0m22.331s 00:14:09.300 sys 0m6.327s 00:14:09.300 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:09.300 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.300 ************************************ 00:14:09.300 END TEST nvmf_nvme_cli 00:14:09.300 ************************************ 00:14:09.300 14:04:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:09.300 14:04:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:09.300 14:04:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:09.300 14:04:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:09.300 14:04:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:09.300 ************************************ 00:14:09.300 START TEST nvmf_vfio_user 00:14:09.300 ************************************ 00:14:09.300 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:09.562 * Looking for test storage... 00:14:09.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.562 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:09.562 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:14:09.562 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:09.562 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:09.562 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:09.562 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:09.562 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:09.562 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:09.562 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:09.562 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:09.562 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:09.562 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:09.562 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:09.562 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:09.562 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:09.562 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:09.562 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:09.562 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:09.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.563 --rc genhtml_branch_coverage=1 00:14:09.563 --rc genhtml_function_coverage=1 00:14:09.563 --rc genhtml_legend=1 00:14:09.563 --rc geninfo_all_blocks=1 00:14:09.563 --rc geninfo_unexecuted_blocks=1 00:14:09.563 00:14:09.563 ' 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:09.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.563 --rc genhtml_branch_coverage=1 00:14:09.563 --rc genhtml_function_coverage=1 00:14:09.563 --rc genhtml_legend=1 00:14:09.563 --rc geninfo_all_blocks=1 00:14:09.563 --rc geninfo_unexecuted_blocks=1 00:14:09.563 00:14:09.563 ' 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:09.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.563 --rc genhtml_branch_coverage=1 00:14:09.563 --rc genhtml_function_coverage=1 00:14:09.563 --rc genhtml_legend=1 00:14:09.563 --rc geninfo_all_blocks=1 00:14:09.563 --rc geninfo_unexecuted_blocks=1 00:14:09.563 00:14:09.563 ' 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:09.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:09.563 --rc genhtml_branch_coverage=1 00:14:09.563 --rc genhtml_function_coverage=1 00:14:09.563 --rc genhtml_legend=1 00:14:09.563 --rc geninfo_all_blocks=1 00:14:09.563 --rc geninfo_unexecuted_blocks=1 00:14:09.563 00:14:09.563 ' 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:09.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
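One genuine script defect surfaces in the sourcing above: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' when its variable expands empty, and bash reports ': integer expression expected'. A minimal sketch of the conventional guard, where SOME_FLAG is a hypothetical stand-in for whatever variable line 33 actually tests:

SOME_FLAG=""
# This is the failing shape: an empty operand is not an integer.
[ "$SOME_FLAG" -eq 1 ] 2>/dev/null || echo "fails as at common.sh:33"
# Guarded form: default the expansion so the operand is always numeric.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi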
00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2676717 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2676717' 00:14:09.563 Process pid: 2676717 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2676717 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2676717 ']' 00:14:09.563 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.564 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:09.564 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.564 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:09.564 14:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:09.564 [2024-12-05 14:04:15.830889] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:14:09.564 [2024-12-05 14:04:15.830953] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.824 [2024-12-05 14:04:15.917842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:09.824 [2024-12-05 14:04:15.952289] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.824 [2024-12-05 14:04:15.952319] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:09.824 [2024-12-05 14:04:15.952324] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.824 [2024-12-05 14:04:15.952330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.824 [2024-12-05 14:04:15.952334] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:09.824 [2024-12-05 14:04:15.953709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.824 [2024-12-05 14:04:15.953857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.824 [2024-12-05 14:04:15.954007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.824 [2024-12-05 14:04:15.954009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:10.394 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.394 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:10.394 14:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:11.778 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:11.778 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:11.778 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:11.778 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:11.778 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:11.778 14:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:11.778 Malloc1 00:14:11.778 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:12.038 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:12.300 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:12.300 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:12.300 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:12.300 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:12.562 Malloc2 00:14:12.562 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
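The per-device setup traced here (directory, malloc bdev, subsystem, namespace, vfio-user listener; the second iteration completes just below) condenses to one transport call plus a loop. A sketch with every command and path taken verbatim from this trace:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t VFIOUSER    # done once, nvmf_vfio_user.sh@64
for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $RPC bdev_malloc_create 64 512 -b Malloc$i
    $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done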
00:14:12.823 14:04:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:12.823 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:13.083 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:13.083 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:13.083 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:13.083 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:13.083 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:13.083 14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:13.083 [2024-12-05 14:04:19.323950] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:14:13.084 [2024-12-05 14:04:19.323996] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2677417 ] 00:14:13.084 [2024-12-05 14:04:19.363736] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:13.084 [2024-12-05 14:04:19.366003] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:13.084 [2024-12-05 14:04:19.366020] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f73121b5000 00:14:13.084 [2024-12-05 14:04:19.367005] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:13.084 [2024-12-05 14:04:19.368004] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:13.084 [2024-12-05 14:04:19.369005] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:13.084 [2024-12-05 14:04:19.370013] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:13.084 [2024-12-05 14:04:19.371020] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:13.084 [2024-12-05 14:04:19.372021] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:13.084 [2024-12-05 14:04:19.373024] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:13.084 [2024-12-05 14:04:19.374035] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:13.084 [2024-12-05 14:04:19.375043] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:13.084 [2024-12-05 14:04:19.375050] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f73121aa000 00:14:13.084 [2024-12-05 14:04:19.375961] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:13.346 [2024-12-05 14:04:19.388733] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:13.346 [2024-12-05 14:04:19.388755] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:13.346 [2024-12-05 14:04:19.394156] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:13.346 [2024-12-05 14:04:19.394192] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:13.346 [2024-12-05 14:04:19.394260] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:13.346 [2024-12-05 14:04:19.394274] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:13.346 [2024-12-05 14:04:19.394278] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:13.346 [2024-12-05 14:04:19.395150] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:13.346 [2024-12-05 14:04:19.395158] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:13.346 [2024-12-05 14:04:19.395163] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:13.346 [2024-12-05 14:04:19.396156] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:13.346 [2024-12-05 14:04:19.396162] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:13.346 [2024-12-05 14:04:19.396168] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:13.346 [2024-12-05 14:04:19.397160] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:13.346 [2024-12-05 14:04:19.397166] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:13.346 [2024-12-05 14:04:19.398156] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
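As an aside on the register reads above: the VS value 0x10300 at offset 0x8 decodes to the controller's NVMe spec version, which the identify dump later in this test also reports. A quick decode, assuming only the standard VS layout (MJR in bits 31:16, MNR in bits 15:8):

vs=$((0x10300))
printf 'NVMe %d.%d\n' $(( (vs >> 16) & 0xffff )) $(( (vs >> 8) & 0xff ))
# Prints "NVMe 1.3", matching "NVMe Specification Version (VS): 1.3"
# in the identify output further below.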
00:14:13.346 [2024-12-05 14:04:19.398162] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:13.346 [2024-12-05 14:04:19.398166] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:13.346 [2024-12-05 14:04:19.398171] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:13.346 [2024-12-05 14:04:19.398277] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:13.346 [2024-12-05 14:04:19.398280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:13.346 [2024-12-05 14:04:19.398284] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:13.346 [2024-12-05 14:04:19.399166] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:13.346 [2024-12-05 14:04:19.400172] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:13.346 [2024-12-05 14:04:19.401177] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:13.346 [2024-12-05 14:04:19.402173] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:13.346 [2024-12-05 14:04:19.402221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:13.346 [2024-12-05 14:04:19.403183] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:13.346 [2024-12-05 14:04:19.403191] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:13.346 [2024-12-05 14:04:19.403195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:13.346 [2024-12-05 14:04:19.403210] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:13.346 [2024-12-05 14:04:19.403215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:13.346 [2024-12-05 14:04:19.403228] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:13.346 [2024-12-05 14:04:19.403232] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:13.346 [2024-12-05 14:04:19.403235] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:13.346 [2024-12-05 14:04:19.403246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:14:13.346 [2024-12-05 14:04:19.403287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:13.346 [2024-12-05 14:04:19.403295] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:13.346 [2024-12-05 14:04:19.403299] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:13.347 [2024-12-05 14:04:19.403303] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:13.347 [2024-12-05 14:04:19.403306] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:13.347 [2024-12-05 14:04:19.403310] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:13.347 [2024-12-05 14:04:19.403314] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:13.347 [2024-12-05 14:04:19.403317] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:13.347 [2024-12-05 14:04:19.403323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:13.347 [2024-12-05 14:04:19.403330] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:13.347 [2024-12-05 14:04:19.403340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:13.347 [2024-12-05 14:04:19.403349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.347 [2024-12-05 14:04:19.403355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.347 [2024-12-05 14:04:19.403361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.347 [2024-12-05 14:04:19.403367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:13.347 [2024-12-05 14:04:19.403371] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:13.347 [2024-12-05 14:04:19.403377] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:13.347 [2024-12-05 14:04:19.403385] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:13.347 [2024-12-05 14:04:19.403392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:13.347 [2024-12-05 14:04:19.403396] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:13.347 
[2024-12-05 14:04:19.403400] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:13.347 [2024-12-05 14:04:19.403407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:13.347 [2024-12-05 14:04:19.403412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:13.347 [2024-12-05 14:04:19.403418] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:13.347 [2024-12-05 14:04:19.403428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:13.347 [2024-12-05 14:04:19.403474] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:13.347 [2024-12-05 14:04:19.403480] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:13.347 [2024-12-05 14:04:19.403486] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:13.347 [2024-12-05 14:04:19.403489] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:13.347 [2024-12-05 14:04:19.403492] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:13.347 [2024-12-05 14:04:19.403496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:13.347 [2024-12-05 14:04:19.403507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:13.347 [2024-12-05 14:04:19.403516] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:13.347 [2024-12-05 14:04:19.403525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:13.347 [2024-12-05 14:04:19.403531] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:13.347 [2024-12-05 14:04:19.403536] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:13.347 [2024-12-05 14:04:19.403539] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:13.347 [2024-12-05 14:04:19.403541] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:13.347 [2024-12-05 14:04:19.403546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:13.347 [2024-12-05 14:04:19.403563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:13.347 [2024-12-05 14:04:19.403571] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:14:13.347 [2024-12-05 14:04:19.403577] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:13.347 [2024-12-05 14:04:19.403582] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:13.347 [2024-12-05 14:04:19.403586] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:13.347 [2024-12-05 14:04:19.403588] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:13.347 [2024-12-05 14:04:19.403593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:13.347 [2024-12-05 14:04:19.403602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:13.347 [2024-12-05 14:04:19.403610] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:13.347 [2024-12-05 14:04:19.403615] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:13.347 [2024-12-05 14:04:19.403620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:13.347 [2024-12-05 14:04:19.403625] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:13.347 [2024-12-05 14:04:19.403628] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:13.347 [2024-12-05 14:04:19.403632] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:13.347 [2024-12-05 14:04:19.403636] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:13.347 [2024-12-05 14:04:19.403639] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:13.347 [2024-12-05 14:04:19.403643] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:13.347 [2024-12-05 14:04:19.403657] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:13.347 [2024-12-05 14:04:19.403664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:13.347 [2024-12-05 14:04:19.403672] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:13.347 [2024-12-05 14:04:19.403683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:13.347 [2024-12-05 14:04:19.403691] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:13.347 [2024-12-05 14:04:19.403701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:13.347 [2024-12-05 14:04:19.403709] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:13.347 [2024-12-05 14:04:19.403715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:13.347 [2024-12-05 14:04:19.403725] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:13.347 [2024-12-05 14:04:19.403728] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:13.347 [2024-12-05 14:04:19.403731] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:13.347 [2024-12-05 14:04:19.403733] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:13.347 [2024-12-05 14:04:19.403736] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:13.347 [2024-12-05 14:04:19.403740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:13.347 [2024-12-05 14:04:19.403747] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:13.347 [2024-12-05 14:04:19.403750] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:13.347 [2024-12-05 14:04:19.403753] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:13.347 [2024-12-05 14:04:19.403757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:13.347 [2024-12-05 14:04:19.403762] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:13.347 [2024-12-05 14:04:19.403765] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:13.347 [2024-12-05 14:04:19.403768] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:13.347 [2024-12-05 14:04:19.403772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:13.347 [2024-12-05 14:04:19.403778] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:13.347 [2024-12-05 14:04:19.403781] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:13.347 [2024-12-05 14:04:19.403783] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:13.347 [2024-12-05 14:04:19.403787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:13.347 [2024-12-05 14:04:19.403792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:13.347 [2024-12-05 14:04:19.403800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:14:13.347 [2024-12-05 14:04:19.403808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:13.348 [2024-12-05 14:04:19.403813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:13.348 ===================================================== 00:14:13.348 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:13.348 ===================================================== 00:14:13.348 Controller Capabilities/Features 00:14:13.348 ================================ 00:14:13.348 Vendor ID: 4e58 00:14:13.348 Subsystem Vendor ID: 4e58 00:14:13.348 Serial Number: SPDK1 00:14:13.348 Model Number: SPDK bdev Controller 00:14:13.348 Firmware Version: 25.01 00:14:13.348 Recommended Arb Burst: 6 00:14:13.348 IEEE OUI Identifier: 8d 6b 50 00:14:13.348 Multi-path I/O 00:14:13.348 May have multiple subsystem ports: Yes 00:14:13.348 May have multiple controllers: Yes 00:14:13.348 Associated with SR-IOV VF: No 00:14:13.348 Max Data Transfer Size: 131072 00:14:13.348 Max Number of Namespaces: 32 00:14:13.348 Max Number of I/O Queues: 127 00:14:13.348 NVMe Specification Version (VS): 1.3 00:14:13.348 NVMe Specification Version (Identify): 1.3 00:14:13.348 Maximum Queue Entries: 256 00:14:13.348 Contiguous Queues Required: Yes 00:14:13.348 Arbitration Mechanisms Supported 00:14:13.348 Weighted Round Robin: Not Supported 00:14:13.348 Vendor Specific: Not Supported 00:14:13.348 Reset Timeout: 15000 ms 00:14:13.348 Doorbell Stride: 4 bytes 00:14:13.348 NVM Subsystem Reset: Not Supported 00:14:13.348 Command Sets Supported 00:14:13.348 NVM Command Set: Supported 00:14:13.348 Boot Partition: Not Supported 00:14:13.348 Memory Page Size Minimum: 4096 bytes 00:14:13.348 Memory Page Size Maximum: 4096 bytes 00:14:13.348 Persistent Memory Region: Not Supported 00:14:13.348 Optional Asynchronous Events Supported 00:14:13.348 Namespace Attribute Notices: Supported 00:14:13.348 Firmware Activation Notices: Not Supported 00:14:13.348 ANA Change Notices: Not Supported 00:14:13.348 PLE Aggregate Log Change Notices: Not Supported 00:14:13.348 LBA Status Info Alert Notices: Not Supported 00:14:13.348 EGE Aggregate Log Change Notices: Not Supported 00:14:13.348 Normal NVM Subsystem Shutdown event: Not Supported 00:14:13.348 Zone Descriptor Change Notices: Not Supported 00:14:13.348 Discovery Log Change Notices: Not Supported 00:14:13.348 Controller Attributes 00:14:13.348 128-bit Host Identifier: Supported 00:14:13.348 Non-Operational Permissive Mode: Not Supported 00:14:13.348 NVM Sets: Not Supported 00:14:13.348 Read Recovery Levels: Not Supported 00:14:13.348 Endurance Groups: Not Supported 00:14:13.348 Predictable Latency Mode: Not Supported 00:14:13.348 Traffic Based Keep ALive: Not Supported 00:14:13.348 Namespace Granularity: Not Supported 00:14:13.348 SQ Associations: Not Supported 00:14:13.348 UUID List: Not Supported 00:14:13.348 Multi-Domain Subsystem: Not Supported 00:14:13.348 Fixed Capacity Management: Not Supported 00:14:13.348 Variable Capacity Management: Not Supported 00:14:13.348 Delete Endurance Group: Not Supported 00:14:13.348 Delete NVM Set: Not Supported 00:14:13.348 Extended LBA Formats Supported: Not Supported 00:14:13.348 Flexible Data Placement Supported: Not Supported 00:14:13.348 00:14:13.348 Controller Memory Buffer Support 00:14:13.348 ================================ 00:14:13.348 
Supported: No 00:14:13.348 00:14:13.348 Persistent Memory Region Support 00:14:13.348 ================================ 00:14:13.348 Supported: No 00:14:13.348 00:14:13.348 Admin Command Set Attributes 00:14:13.348 ============================ 00:14:13.348 Security Send/Receive: Not Supported 00:14:13.348 Format NVM: Not Supported 00:14:13.348 Firmware Activate/Download: Not Supported 00:14:13.348 Namespace Management: Not Supported 00:14:13.348 Device Self-Test: Not Supported 00:14:13.348 Directives: Not Supported 00:14:13.348 NVMe-MI: Not Supported 00:14:13.348 Virtualization Management: Not Supported 00:14:13.348 Doorbell Buffer Config: Not Supported 00:14:13.348 Get LBA Status Capability: Not Supported 00:14:13.348 Command & Feature Lockdown Capability: Not Supported 00:14:13.348 Abort Command Limit: 4 00:14:13.348 Async Event Request Limit: 4 00:14:13.348 Number of Firmware Slots: N/A 00:14:13.348 Firmware Slot 1 Read-Only: N/A 00:14:13.348 Firmware Activation Without Reset: N/A 00:14:13.348 Multiple Update Detection Support: N/A 00:14:13.348 Firmware Update Granularity: No Information Provided 00:14:13.348 Per-Namespace SMART Log: No 00:14:13.348 Asymmetric Namespace Access Log Page: Not Supported 00:14:13.348 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:13.348 Command Effects Log Page: Supported 00:14:13.348 Get Log Page Extended Data: Supported 00:14:13.348 Telemetry Log Pages: Not Supported 00:14:13.348 Persistent Event Log Pages: Not Supported 00:14:13.348 Supported Log Pages Log Page: May Support 00:14:13.348 Commands Supported & Effects Log Page: Not Supported 00:14:13.348 Feature Identifiers & Effects Log Page:May Support 00:14:13.348 NVMe-MI Commands & Effects Log Page: May Support 00:14:13.348 Data Area 4 for Telemetry Log: Not Supported 00:14:13.348 Error Log Page Entries Supported: 128 00:14:13.348 Keep Alive: Supported 00:14:13.348 Keep Alive Granularity: 10000 ms 00:14:13.348 00:14:13.348 NVM Command Set Attributes 00:14:13.348 ========================== 00:14:13.348 Submission Queue Entry Size 00:14:13.348 Max: 64 00:14:13.348 Min: 64 00:14:13.348 Completion Queue Entry Size 00:14:13.348 Max: 16 00:14:13.348 Min: 16 00:14:13.348 Number of Namespaces: 32 00:14:13.348 Compare Command: Supported 00:14:13.348 Write Uncorrectable Command: Not Supported 00:14:13.348 Dataset Management Command: Supported 00:14:13.348 Write Zeroes Command: Supported 00:14:13.348 Set Features Save Field: Not Supported 00:14:13.348 Reservations: Not Supported 00:14:13.348 Timestamp: Not Supported 00:14:13.348 Copy: Supported 00:14:13.348 Volatile Write Cache: Present 00:14:13.348 Atomic Write Unit (Normal): 1 00:14:13.348 Atomic Write Unit (PFail): 1 00:14:13.348 Atomic Compare & Write Unit: 1 00:14:13.348 Fused Compare & Write: Supported 00:14:13.348 Scatter-Gather List 00:14:13.348 SGL Command Set: Supported (Dword aligned) 00:14:13.348 SGL Keyed: Not Supported 00:14:13.348 SGL Bit Bucket Descriptor: Not Supported 00:14:13.348 SGL Metadata Pointer: Not Supported 00:14:13.348 Oversized SGL: Not Supported 00:14:13.348 SGL Metadata Address: Not Supported 00:14:13.348 SGL Offset: Not Supported 00:14:13.348 Transport SGL Data Block: Not Supported 00:14:13.348 Replay Protected Memory Block: Not Supported 00:14:13.348 00:14:13.348 Firmware Slot Information 00:14:13.348 ========================= 00:14:13.348 Active slot: 1 00:14:13.348 Slot 1 Firmware Revision: 25.01 00:14:13.348 00:14:13.348 00:14:13.348 Commands Supported and Effects 00:14:13.348 ============================== 00:14:13.348 Admin 
Commands 00:14:13.348 -------------- 00:14:13.348 Get Log Page (02h): Supported 00:14:13.348 Identify (06h): Supported 00:14:13.348 Abort (08h): Supported 00:14:13.348 Set Features (09h): Supported 00:14:13.348 Get Features (0Ah): Supported 00:14:13.348 Asynchronous Event Request (0Ch): Supported 00:14:13.348 Keep Alive (18h): Supported 00:14:13.348 I/O Commands 00:14:13.348 ------------ 00:14:13.348 Flush (00h): Supported LBA-Change 00:14:13.348 Write (01h): Supported LBA-Change 00:14:13.348 Read (02h): Supported 00:14:13.348 Compare (05h): Supported 00:14:13.348 Write Zeroes (08h): Supported LBA-Change 00:14:13.348 Dataset Management (09h): Supported LBA-Change 00:14:13.348 Copy (19h): Supported LBA-Change 00:14:13.348 00:14:13.348 Error Log 00:14:13.348 ========= 00:14:13.348 00:14:13.348 Arbitration 00:14:13.348 =========== 00:14:13.348 Arbitration Burst: 1 00:14:13.348 00:14:13.348 Power Management 00:14:13.348 ================ 00:14:13.348 Number of Power States: 1 00:14:13.348 Current Power State: Power State #0 00:14:13.348 Power State #0: 00:14:13.348 Max Power: 0.00 W 00:14:13.348 Non-Operational State: Operational 00:14:13.348 Entry Latency: Not Reported 00:14:13.348 Exit Latency: Not Reported 00:14:13.348 Relative Read Throughput: 0 00:14:13.348 Relative Read Latency: 0 00:14:13.348 Relative Write Throughput: 0 00:14:13.348 Relative Write Latency: 0 00:14:13.348 Idle Power: Not Reported 00:14:13.348 Active Power: Not Reported 00:14:13.348 Non-Operational Permissive Mode: Not Supported 00:14:13.348 00:14:13.348 Health Information 00:14:13.348 ================== 00:14:13.348 Critical Warnings: 00:14:13.348 Available Spare Space: OK 00:14:13.348 Temperature: OK 00:14:13.348 Device Reliability: OK 00:14:13.349 Read Only: No 00:14:13.349 Volatile Memory Backup: OK 00:14:13.349 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:13.349 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:13.349 Available Spare: 0% 00:14:13.349 Available Spare Threshold: 0% 00:14:13.349 Life Percentage Used: 0% 00:14:13.349 Data Units Read: 0 00:14:13.349 Data Units Written: 0 00:14:13.349 Host Read Commands: 0 00:14:13.349 Host Write Commands: 0 00:14:13.349 Controller Busy Time: 0 minutes 00:14:13.349 Power Cycles: 0 00:14:13.349 Power On Hours: 0 hours 00:14:13.349 Unsafe Shutdowns: 0 00:14:13.349 Unrecoverable Media Errors: 0 00:14:13.349 Lifetime Error Log Entries: 0 00:14:13.349 Warning Temperature Time: 0 minutes 00:14:13.349 Critical Temperature Time: 0 minutes 00:14:13.349 00:14:13.349 Number of Queues 00:14:13.349 ================ 00:14:13.349 Number of I/O Submission Queues: 127 00:14:13.349 Number of I/O Completion Queues: 127 00:14:13.349 00:14:13.349 Active Namespaces 00:14:13.349 ================= 00:14:13.349 Namespace ID:1 00:14:13.349 Error Recovery Timeout: Unlimited 00:14:13.349 Command Set Identifier: NVM (00h) 00:14:13.349 Deallocate: Supported 00:14:13.349 Deallocated/Unwritten Error: Not Supported 00:14:13.349 Deallocated Read Value: Unknown 00:14:13.349 Deallocate in Write Zeroes: Not Supported 00:14:13.349 Deallocated Guard Field: 0xFFFF 00:14:13.349 Flush: Supported 00:14:13.349 Reservation: Supported 00:14:13.349 Namespace Sharing Capabilities: Multiple Controllers 00:14:13.349 Size (in LBAs): 131072 (0GiB) 00:14:13.349 Capacity (in LBAs): 131072 (0GiB) 00:14:13.349 Utilization (in LBAs): 131072 (0GiB) 00:14:13.349 NGUID: 8CA1D1118D3948E6992AE77187A2BD16 00:14:13.349 UUID: 8ca1d111-8d39-48e6-992a-e77187a2bd16 00:14:13.349 Thin Provisioning: Not Supported 00:14:13.349 Per-NS Atomic Units: Yes 00:14:13.349 Atomic Boundary Size (Normal): 0 00:14:13.349 Atomic Boundary Size (PFail): 0 00:14:13.349 Atomic Boundary Offset: 0 00:14:13.349 Maximum Single Source Range Length: 65535 00:14:13.349 Maximum Copy Length: 65535 00:14:13.349 Maximum Source Range Count: 1 00:14:13.349 NGUID/EUI64 Never Reused: No 00:14:13.349 Namespace Write Protected: No 00:14:13.349 Number of LBA Formats: 1 00:14:13.349 Current LBA Format: LBA Format #00 00:14:13.349 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:13.349 00:14:13.349
[2024-12-05 14:04:19.403883] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:13.349 [2024-12-05 14:04:19.403892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:13.349 [2024-12-05 14:04:19.403914] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:13.349 [2024-12-05 14:04:19.403921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.349 [2024-12-05 14:04:19.403926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.349 [2024-12-05 14:04:19.403930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.349 [2024-12-05 14:04:19.403935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:13.349 [2024-12-05 14:04:19.404189] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:13.349 [2024-12-05 14:04:19.404197] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:13.349 [2024-12-05 14:04:19.405195] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:13.349 [2024-12-05 14:04:19.405236] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:13.349 [2024-12-05 14:04:19.405242] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:13.349 [2024-12-05 14:04:19.406203] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:13.349 [2024-12-05 14:04:19.406211] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:13.349 [2024-12-05 14:04:19.406260] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:13.349 [2024-12-05 14:04:19.407228] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:13.349
14:04:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
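The spdk_nvme_perf invocation above drives the 4 KiB sequential-read workload whose results follow. A hedged reading of its flags, restated as a standalone command; the flag meanings are assumptions from common SPDK usage, not verified against this exact build:

# -r: transport ID (assumption): VFIOUSER transport, controller socket directory, target subsystem NQN
# -s: DPDK hugepage memory size in MB (assumption)
# -g: single-file segments for DPDK memory, matching the --single-file-segments EAL argument seen later in this log (assumption)
# -q 128: queue depth; -o 4096: I/O size in bytes; -w read: sequential reads
# -t 5: run for 5 seconds; -c 0x2: core mask selecting lcore 1 only
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
  -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The sh@85 run below repeats this with -w write, and the reconnect example at sh@86 uses -q 32 -w randrw -M 50 on core mask 0xE, which is why their latency tables have the same shape.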
00:14:13.349 [2024-12-05 14:04:19.593118] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:18.635 Initializing NVMe Controllers 00:14:18.635 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:18.635 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:18.635 Initialization complete. Launching workers. 00:14:18.635 ======================================================== 00:14:18.635 Latency(us) 00:14:18.635 Device Information : IOPS MiB/s Average min max 00:14:18.635 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39879.42 155.78 3209.55 871.81 10991.67 00:14:18.635 ======================================================== 00:14:18.635 Total : 39879.42 155.78 3209.55 871.81 10991.67 00:14:18.635 00:14:18.635 [2024-12-05 14:04:24.612930] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:18.635 14:04:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:18.635 [2024-12-05 14:04:24.802754] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:23.917 Initializing NVMe Controllers 00:14:23.917 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:23.917 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:23.917 Initialization complete. Launching workers. 
00:14:23.917 ======================================================== 00:14:23.917 Latency(us) 00:14:23.917 Device Information : IOPS MiB/s Average min max 00:14:23.917 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16059.13 62.73 7976.09 5993.24 9968.50 00:14:23.917 ======================================================== 00:14:23.917 Total : 16059.13 62.73 7976.09 5993.24 9968.50 00:14:23.917 00:14:23.917 [2024-12-05 14:04:29.843880] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:23.917 14:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:23.917 [2024-12-05 14:04:30.046764] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:29.201 [2024-12-05 14:04:35.135738] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:29.201 Initializing NVMe Controllers 00:14:29.201 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:29.201 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:29.201 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:29.201 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:29.201 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:29.201 Initialization complete. Launching workers. 00:14:29.201 Starting thread on core 2 00:14:29.201 Starting thread on core 3 00:14:29.201 Starting thread on core 1 00:14:29.201 14:04:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:29.201 [2024-12-05 14:04:35.385635] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:32.494 [2024-12-05 14:04:38.446739] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:32.494 Initializing NVMe Controllers 00:14:32.494 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:32.494 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:32.494 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:32.494 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:32.494 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:32.494 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:32.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:32.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:32.494 Initialization complete. Launching workers. 
00:14:32.494 Starting thread on core 1 with urgent priority queue 00:14:32.494 Starting thread on core 2 with urgent priority queue 00:14:32.494 Starting thread on core 3 with urgent priority queue 00:14:32.494 Starting thread on core 0 with urgent priority queue 00:14:32.494 SPDK bdev Controller (SPDK1 ) core 0: 11900.00 IO/s 8.40 secs/100000 ios 00:14:32.494 SPDK bdev Controller (SPDK1 ) core 1: 7751.67 IO/s 12.90 secs/100000 ios 00:14:32.494 SPDK bdev Controller (SPDK1 ) core 2: 12382.67 IO/s 8.08 secs/100000 ios 00:14:32.494 SPDK bdev Controller (SPDK1 ) core 3: 13109.33 IO/s 7.63 secs/100000 ios 00:14:32.494 ======================================================== 00:14:32.494 00:14:32.494 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:32.494 [2024-12-05 14:04:38.695880] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:32.494 Initializing NVMe Controllers 00:14:32.494 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:32.494 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:32.494 Namespace ID: 1 size: 0GB 00:14:32.494 Initialization complete. 00:14:32.494 INFO: using host memory buffer for IO 00:14:32.494 Hello world! 00:14:32.494 [2024-12-05 14:04:38.730091] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:32.494 14:04:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:32.755 [2024-12-05 14:04:38.969893] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:33.696 Initializing NVMe Controllers 00:14:33.696 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:33.696 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:33.696 Initialization complete. Launching workers. 
00:14:33.696 submit (in ns) avg, min, max = 5993.6, 2833.3, 3999047.5 00:14:33.696 complete (in ns) avg, min, max = 15689.5, 1651.7, 4025443.3 00:14:33.696 00:14:33.696 Submit histogram 00:14:33.696 ================ 00:14:33.696 Range in us Cumulative Count 00:14:33.696 2.827 - 2.840: 0.0499% ( 10) 00:14:33.696 2.840 - 2.853: 0.3246% ( 55) 00:14:33.696 2.853 - 2.867: 1.7128% ( 278) 00:14:33.696 2.867 - 2.880: 4.6290% ( 584) 00:14:33.696 2.880 - 2.893: 9.2929% ( 934) 00:14:33.696 2.893 - 2.907: 14.1566% ( 974) 00:14:33.696 2.907 - 2.920: 20.2337% ( 1217) 00:14:33.696 2.920 - 2.933: 27.0149% ( 1358) 00:14:33.696 2.933 - 2.947: 32.4928% ( 1097) 00:14:33.696 2.947 - 2.960: 37.5612% ( 1015) 00:14:33.696 2.960 - 2.973: 42.7295% ( 1035) 00:14:33.696 2.973 - 2.987: 48.4470% ( 1145) 00:14:33.696 2.987 - 3.000: 54.9586% ( 1304) 00:14:33.696 3.000 - 3.013: 63.2228% ( 1655) 00:14:33.696 3.013 - 3.027: 71.8466% ( 1727) 00:14:33.696 3.027 - 3.040: 80.2307% ( 1679) 00:14:33.696 3.040 - 3.053: 86.9220% ( 1340) 00:14:33.696 3.053 - 3.067: 92.2900% ( 1075) 00:14:33.696 3.067 - 3.080: 96.0501% ( 753) 00:14:33.696 3.080 - 3.093: 98.1274% ( 416) 00:14:33.696 3.093 - 3.107: 98.9464% ( 164) 00:14:33.696 3.107 - 3.120: 99.2809% ( 67) 00:14:33.696 3.120 - 3.133: 99.4208% ( 28) 00:14:33.696 3.133 - 3.147: 99.4757% ( 11) 00:14:33.696 3.147 - 3.160: 99.5006% ( 5) 00:14:33.696 3.160 - 3.173: 99.5106% ( 2) 00:14:33.696 3.173 - 3.187: 99.5156% ( 1) 00:14:33.696 3.267 - 3.280: 99.5206% ( 1) 00:14:33.696 3.680 - 3.707: 99.5256% ( 1) 00:14:33.696 3.707 - 3.733: 99.5306% ( 1) 00:14:33.696 3.813 - 3.840: 99.5406% ( 2) 00:14:33.696 3.840 - 3.867: 99.5456% ( 1) 00:14:33.696 4.080 - 4.107: 99.5506% ( 1) 00:14:33.696 4.133 - 4.160: 99.5556% ( 1) 00:14:33.696 4.400 - 4.427: 99.5606% ( 1) 00:14:33.696 4.480 - 4.507: 99.5656% ( 1) 00:14:33.696 4.587 - 4.613: 99.5706% ( 1) 00:14:33.696 4.667 - 4.693: 99.5756% ( 1) 00:14:33.696 4.720 - 4.747: 99.5805% ( 1) 00:14:33.696 4.747 - 4.773: 99.5855% ( 1) 00:14:33.696 4.907 - 4.933: 99.5905% ( 1) 00:14:33.696 4.960 - 4.987: 99.5955% ( 1) 00:14:33.696 4.987 - 5.013: 99.6055% ( 2) 00:14:33.696 5.067 - 5.093: 99.6155% ( 2) 00:14:33.696 5.093 - 5.120: 99.6205% ( 1) 00:14:33.696 5.147 - 5.173: 99.6255% ( 1) 00:14:33.696 5.227 - 5.253: 99.6305% ( 1) 00:14:33.696 5.360 - 5.387: 99.6355% ( 1) 00:14:33.696 5.467 - 5.493: 99.6455% ( 2) 00:14:33.696 5.627 - 5.653: 99.6505% ( 1) 00:14:33.696 5.733 - 5.760: 99.6554% ( 1) 00:14:33.696 5.760 - 5.787: 99.6654% ( 2) 00:14:33.696 5.787 - 5.813: 99.6704% ( 1) 00:14:33.696 5.867 - 5.893: 99.6754% ( 1) 00:14:33.696 5.920 - 5.947: 99.6804% ( 1) 00:14:33.696 5.947 - 5.973: 99.6904% ( 2) 00:14:33.696 5.973 - 6.000: 99.6954% ( 1) 00:14:33.696 6.027 - 6.053: 99.7004% ( 1) 00:14:33.696 6.267 - 6.293: 99.7054% ( 1) 00:14:33.696 6.293 - 6.320: 99.7104% ( 1) 00:14:33.696 6.320 - 6.347: 99.7204% ( 2) 00:14:33.696 6.347 - 6.373: 99.7254% ( 1) 00:14:33.696 6.400 - 6.427: 99.7304% ( 1) 00:14:33.696 6.507 - 6.533: 99.7403% ( 2) 00:14:33.696 6.560 - 6.587: 99.7453% ( 1) 00:14:33.696 6.693 - 6.720: 99.7503% ( 1) 00:14:33.696 6.747 - 6.773: 99.7603% ( 2) 00:14:33.696 6.800 - 6.827: 99.7653% ( 1) 00:14:33.696 6.827 - 6.880: 99.7853% ( 4) 00:14:33.696 6.880 - 6.933: 99.7903% ( 1) 00:14:33.696 6.933 - 6.987: 99.7953% ( 1) 00:14:33.696 6.987 - 7.040: 99.8102% ( 3) 00:14:33.696 7.040 - 7.093: 99.8202% ( 2) 00:14:33.696 7.093 - 7.147: 99.8352% ( 3) 00:14:33.696 7.147 - 7.200: 99.8402% ( 1) 00:14:33.696 7.307 - 7.360: 99.8502% ( 2) 00:14:33.696 7.360 - 7.413: 99.8552% ( 1) 00:14:33.696 
7.467 - 7.520: 99.8602% ( 1) 00:14:33.696 7.573 - 7.627: 99.8652% ( 1) 00:14:33.696 [2024-12-05 14:04:39.988456] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:33.956 7.733 - 7.787: 99.8702% ( 1) 00:14:33.956 7.840 - 7.893: 99.8802% ( 2) 00:14:33.956 7.893 - 7.947: 99.8851% ( 1) 00:14:33.956 8.107 - 8.160: 99.8901% ( 1) 00:14:33.956 8.320 - 8.373: 99.8951% ( 1) 00:14:33.956 8.480 - 8.533: 99.9051% ( 2) 00:14:33.956 11.733 - 11.787: 99.9101% ( 1) 00:14:33.956 13.867 - 13.973: 99.9151% ( 1) 00:14:33.956 15.360 - 15.467: 99.9201% ( 1) 00:14:33.956 119.467 - 120.320: 99.9251% ( 1) 00:14:33.956 3986.773 - 4014.080: 100.0000% ( 15) 00:14:33.956 00:14:33.956 Complete histogram 00:14:33.956 ================== 00:14:33.956 Range in us Cumulative Count 00:14:33.956 1.647 - 1.653: 0.0250% ( 5) 00:14:33.956 1.653 - 1.660: 1.0087% ( 197) 00:14:33.956 1.660 - 1.667: 1.1735% ( 33) 00:14:33.956 1.667 - 1.673: 1.2534% ( 16) 00:14:33.956 1.673 - 1.680: 1.3632% ( 22) 00:14:33.956 1.680 - 1.687: 1.4431% ( 16) 00:14:33.956 1.687 - 1.693: 2.3669% ( 185) 00:14:33.956 1.693 - 1.700: 46.3697% ( 8812) 00:14:33.956 1.700 - 1.707: 53.7601% ( 1480) 00:14:33.956 1.707 - 1.720: 71.5819% ( 3569) 00:14:33.956 1.720 - 1.733: 81.9435% ( 2075) 00:14:33.956 1.733 - 1.747: 83.8160% ( 375) 00:14:33.956 1.747 - 1.760: 87.1068% ( 659) 00:14:33.956 1.760 - 1.773: 92.5047% ( 1081) 00:14:33.956 1.773 - 1.787: 96.6094% ( 822) 00:14:33.956 1.787 - 1.800: 98.7516% ( 429) 00:14:33.956 1.800 - 1.813: 99.3658% ( 123) 00:14:33.956 1.813 - 1.827: 99.4457% ( 16) 00:14:33.956 1.827 - 1.840: 99.4707% ( 5) 00:14:33.956 1.853 - 1.867: 99.4757% ( 1) 00:14:33.956 1.960 - 1.973: 99.4807% ( 1) 00:14:33.956 3.787 - 3.813: 99.4907% ( 2) 00:14:33.956 4.160 - 4.187: 99.4957% ( 1) 00:14:33.956 4.373 - 4.400: 99.5006% ( 1) 00:14:33.956 4.400 - 4.427: 99.5056% ( 1) 00:14:33.956 4.560 - 4.587: 99.5106% ( 1) 00:14:33.956 4.693 - 4.720: 99.5156% ( 1) 00:14:33.956 4.747 - 4.773: 99.5206% ( 1) 00:14:33.956 4.827 - 4.853: 99.5256% ( 1) 00:14:33.956 4.880 - 4.907: 99.5306% ( 1) 00:14:33.956 4.907 - 4.933: 99.5356% ( 1) 00:14:33.956 5.013 - 5.040: 99.5406% ( 1) 00:14:33.956 5.200 - 5.227: 99.5456% ( 1) 00:14:33.956 5.280 - 5.307: 99.5506% ( 1) 00:14:33.956 5.333 - 5.360: 99.5556% ( 1) 00:14:33.956 5.360 - 5.387: 99.5606% ( 1) 00:14:33.956 5.387 - 5.413: 99.5656% ( 1) 00:14:33.956 5.413 - 5.440: 99.5706% ( 1) 00:14:33.956 5.440 - 5.467: 99.5805% ( 2) 00:14:33.956 5.493 - 5.520: 99.5855% ( 1) 00:14:33.956 5.547 - 5.573: 99.5905% ( 1) 00:14:33.956 5.653 - 5.680: 99.5955% ( 1) 00:14:33.956 5.733 - 5.760: 99.6055% ( 2) 00:14:33.956 6.107 - 6.133: 99.6155% ( 2) 00:14:33.956 6.240 - 6.267: 99.6205% ( 1) 00:14:33.956 6.320 - 6.347: 99.6255% ( 1) 00:14:33.956 6.507 - 6.533: 99.6305% ( 1) 00:14:33.956 6.667 - 6.693: 99.6355% ( 1) 00:14:33.956 7.520 - 7.573: 99.6405% ( 1) 00:14:33.956 12.267 - 12.320: 99.6455% ( 1) 00:14:33.957 33.493 - 33.707: 99.6505% ( 1) 00:14:33.957 3986.773 - 4014.080: 99.9950% ( 69) 00:14:33.957 4014.080 - 4041.387: 100.0000% ( 1) 00:14:33.957 00:14:33.957 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:33.957 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:33.957 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local 
subnqn=nqn.2019-07.io.spdk:cnode1 00:14:33.957 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:33.957 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:33.957 [ 00:14:33.957 { 00:14:33.957 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:33.957 "subtype": "Discovery", 00:14:33.957 "listen_addresses": [], 00:14:33.957 "allow_any_host": true, 00:14:33.957 "hosts": [] 00:14:33.957 }, 00:14:33.957 { 00:14:33.957 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:33.957 "subtype": "NVMe", 00:14:33.957 "listen_addresses": [ 00:14:33.957 { 00:14:33.957 "trtype": "VFIOUSER", 00:14:33.957 "adrfam": "IPv4", 00:14:33.957 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:33.957 "trsvcid": "0" 00:14:33.957 } 00:14:33.957 ], 00:14:33.957 "allow_any_host": true, 00:14:33.957 "hosts": [], 00:14:33.957 "serial_number": "SPDK1", 00:14:33.957 "model_number": "SPDK bdev Controller", 00:14:33.957 "max_namespaces": 32, 00:14:33.957 "min_cntlid": 1, 00:14:33.957 "max_cntlid": 65519, 00:14:33.957 "namespaces": [ 00:14:33.957 { 00:14:33.957 "nsid": 1, 00:14:33.957 "bdev_name": "Malloc1", 00:14:33.957 "name": "Malloc1", 00:14:33.957 "nguid": "8CA1D1118D3948E6992AE77187A2BD16", 00:14:33.957 "uuid": "8ca1d111-8d39-48e6-992a-e77187a2bd16" 00:14:33.957 } 00:14:33.957 ] 00:14:33.957 }, 00:14:33.957 { 00:14:33.957 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:33.957 "subtype": "NVMe", 00:14:33.957 "listen_addresses": [ 00:14:33.957 { 00:14:33.957 "trtype": "VFIOUSER", 00:14:33.957 "adrfam": "IPv4", 00:14:33.957 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:33.957 "trsvcid": "0" 00:14:33.957 } 00:14:33.957 ], 00:14:33.957 "allow_any_host": true, 00:14:33.957 "hosts": [], 00:14:33.957 "serial_number": "SPDK2", 00:14:33.957 "model_number": "SPDK bdev Controller", 00:14:33.957 "max_namespaces": 32, 00:14:33.957 "min_cntlid": 1, 00:14:33.957 "max_cntlid": 65519, 00:14:33.957 "namespaces": [ 00:14:33.957 { 00:14:33.957 "nsid": 1, 00:14:33.957 "bdev_name": "Malloc2", 00:14:33.957 "name": "Malloc2", 00:14:33.957 "nguid": "9D5A883830044176B1E76C2ADE0CF78C", 00:14:33.957 "uuid": "9d5a8838-3004-4176-b1e7-6c2ade0cf78c" 00:14:33.957 } 00:14:33.957 ] 00:14:33.957 } 00:14:33.957 ] 00:14:33.957 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:33.957 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2681442 00:14:33.957 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:33.957 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:33.957 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:33.957 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:33.957 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:33.957 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:33.957 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:33.957 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:34.217 [2024-12-05 14:04:40.373878] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:34.217 Malloc3 00:14:34.217 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:34.478 [2024-12-05 14:04:40.568246] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:34.478 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:34.478 Asynchronous Event Request test 00:14:34.478 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:34.478 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:34.478 Registering asynchronous event callbacks... 00:14:34.478 Starting namespace attribute notice tests for all controllers... 00:14:34.478 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:34.478 aer_cb - Changed Namespace 00:14:34.478 Cleaning up... 00:14:34.478 [ 00:14:34.478 { 00:14:34.478 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:34.478 "subtype": "Discovery", 00:14:34.478 "listen_addresses": [], 00:14:34.478 "allow_any_host": true, 00:14:34.478 "hosts": [] 00:14:34.478 }, 00:14:34.478 { 00:14:34.478 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:34.478 "subtype": "NVMe", 00:14:34.478 "listen_addresses": [ 00:14:34.478 { 00:14:34.478 "trtype": "VFIOUSER", 00:14:34.478 "adrfam": "IPv4", 00:14:34.478 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:34.478 "trsvcid": "0" 00:14:34.478 } 00:14:34.478 ], 00:14:34.478 "allow_any_host": true, 00:14:34.479 "hosts": [], 00:14:34.479 "serial_number": "SPDK1", 00:14:34.479 "model_number": "SPDK bdev Controller", 00:14:34.479 "max_namespaces": 32, 00:14:34.479 "min_cntlid": 1, 00:14:34.479 "max_cntlid": 65519, 00:14:34.479 "namespaces": [ 00:14:34.479 { 00:14:34.479 "nsid": 1, 00:14:34.479 "bdev_name": "Malloc1", 00:14:34.479 "name": "Malloc1", 00:14:34.479 "nguid": "8CA1D1118D3948E6992AE77187A2BD16", 00:14:34.479 "uuid": "8ca1d111-8d39-48e6-992a-e77187a2bd16" 00:14:34.479 }, 00:14:34.479 { 00:14:34.479 "nsid": 2, 00:14:34.479 "bdev_name": "Malloc3", 00:14:34.479 "name": "Malloc3", 00:14:34.479 "nguid": "87F847611F704B7EA5A62E47666934A0", 00:14:34.479 "uuid": "87f84761-1f70-4b7e-a5a6-2e47666934a0" 00:14:34.479 } 00:14:34.479 ] 00:14:34.479 }, 00:14:34.479 { 00:14:34.479 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:34.479 "subtype": "NVMe", 00:14:34.479 "listen_addresses": [ 00:14:34.479 { 00:14:34.479 "trtype": "VFIOUSER", 00:14:34.479 "adrfam": "IPv4", 00:14:34.479 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:34.479 "trsvcid": "0" 00:14:34.479 } 00:14:34.479 ], 00:14:34.479 "allow_any_host": true, 00:14:34.479 "hosts": [], 00:14:34.479 "serial_number": "SPDK2", 00:14:34.479 "model_number": "SPDK bdev 
Controller", 00:14:34.479 "max_namespaces": 32, 00:14:34.479 "min_cntlid": 1, 00:14:34.479 "max_cntlid": 65519, 00:14:34.479 "namespaces": [ 00:14:34.479 { 00:14:34.479 "nsid": 1, 00:14:34.479 "bdev_name": "Malloc2", 00:14:34.479 "name": "Malloc2", 00:14:34.479 "nguid": "9D5A883830044176B1E76C2ADE0CF78C", 00:14:34.479 "uuid": "9d5a8838-3004-4176-b1e7-6c2ade0cf78c" 00:14:34.479 } 00:14:34.479 ] 00:14:34.479 } 00:14:34.479 ] 00:14:34.479 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2681442 00:14:34.479 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:34.479 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:34.479 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:34.479 14:04:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:34.742 [2024-12-05 14:04:40.793709] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:14:34.742 [2024-12-05 14:04:40.793758] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2681452 ] 00:14:34.742 [2024-12-05 14:04:40.834726] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:34.742 [2024-12-05 14:04:40.836907] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:34.742 [2024-12-05 14:04:40.836926] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe981c82000 00:14:34.742 [2024-12-05 14:04:40.837907] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:34.742 [2024-12-05 14:04:40.838909] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:34.742 [2024-12-05 14:04:40.839914] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:34.742 [2024-12-05 14:04:40.840915] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:34.742 [2024-12-05 14:04:40.841924] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:34.742 [2024-12-05 14:04:40.842931] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:34.742 [2024-12-05 14:04:40.843936] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:34.742 [2024-12-05 14:04:40.844942] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:14:34.742 [2024-12-05 14:04:40.845948] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:34.742 [2024-12-05 14:04:40.845955] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe981c77000 00:14:34.742 [2024-12-05 14:04:40.846868] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:34.742 [2024-12-05 14:04:40.859741] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:34.742 [2024-12-05 14:04:40.859762] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:34.742 [2024-12-05 14:04:40.864836] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:34.742 [2024-12-05 14:04:40.864874] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:34.742 [2024-12-05 14:04:40.864935] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:34.742 [2024-12-05 14:04:40.864949] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:34.742 [2024-12-05 14:04:40.864953] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:34.742 [2024-12-05 14:04:40.865839] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:34.742 [2024-12-05 14:04:40.865848] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:34.742 [2024-12-05 14:04:40.865854] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:34.742 [2024-12-05 14:04:40.866844] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:34.742 [2024-12-05 14:04:40.866852] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:34.742 [2024-12-05 14:04:40.866857] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:34.742 [2024-12-05 14:04:40.867854] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:34.742 [2024-12-05 14:04:40.867862] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:34.742 [2024-12-05 14:04:40.868859] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:34.742 [2024-12-05 14:04:40.868865] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:14:34.742 [2024-12-05 14:04:40.868869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:34.742 [2024-12-05 14:04:40.868874] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:34.742 [2024-12-05 14:04:40.868980] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:34.742 [2024-12-05 14:04:40.868984] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:34.742 [2024-12-05 14:04:40.868988] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:34.742 [2024-12-05 14:04:40.869866] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:34.742 [2024-12-05 14:04:40.870872] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:34.742 [2024-12-05 14:04:40.871884] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:34.742 [2024-12-05 14:04:40.872884] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:34.742 [2024-12-05 14:04:40.872914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:34.742 [2024-12-05 14:04:40.873892] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:34.742 [2024-12-05 14:04:40.873901] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:34.742 [2024-12-05 14:04:40.873904] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:34.742 [2024-12-05 14:04:40.873919] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:34.742 [2024-12-05 14:04:40.873925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:34.742 [2024-12-05 14:04:40.873936] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:34.742 [2024-12-05 14:04:40.873940] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:34.742 [2024-12-05 14:04:40.873944] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:34.742 [2024-12-05 14:04:40.873954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:34.742 [2024-12-05 14:04:40.881463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:34.742 
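Each identify step in this trace issues the same IDENTIFY (06h) admin command and differs only in cdw10, which carries the CNS (Controller or Namespace Structure) code. A small lookup for the CNS values that appear in the records above and below, as an annotation using standard NVMe 1.3 meanings rather than SPDK output:

#!/usr/bin/env bash
# CNS codes observed in the cdw10 of the IDENTIFY commands in this trace.
declare -A cns=(
  [0x00]="Identify Namespace (cdw10:00000000)"
  [0x01]="Identify Controller (cdw10:00000001)"
  [0x02]="Active Namespace ID list (cdw10:00000002)"
  [0x03]="Namespace Identification Descriptor list (cdw10:00000003)"
)
for code in 0x00 0x01 0x02 0x03; do
  printf 'CNS %s: %s\n' "$code" "${cns[$code]}"
done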
[2024-12-05 14:04:40.881472] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:34.742 [2024-12-05 14:04:40.881476] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:34.742 [2024-12-05 14:04:40.881479] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:34.742 [2024-12-05 14:04:40.881482] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:34.742 [2024-12-05 14:04:40.881486] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:34.742 [2024-12-05 14:04:40.881489] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:34.742 [2024-12-05 14:04:40.881493] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:34.742 [2024-12-05 14:04:40.881499] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:34.743 [2024-12-05 14:04:40.881506] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:34.743 [2024-12-05 14:04:40.889459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:34.743 [2024-12-05 14:04:40.889470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.743 [2024-12-05 14:04:40.889476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.743 [2024-12-05 14:04:40.889482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.743 [2024-12-05 14:04:40.889490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.743 [2024-12-05 14:04:40.889494] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:34.743 [2024-12-05 14:04:40.889501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:34.743 [2024-12-05 14:04:40.889507] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:34.743 [2024-12-05 14:04:40.897459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:34.743 [2024-12-05 14:04:40.897466] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:34.743 [2024-12-05 14:04:40.897470] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:14:34.743 [2024-12-05 14:04:40.897477] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:34.743 [2024-12-05 14:04:40.897481] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:34.743 [2024-12-05 14:04:40.897489] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:34.743 [2024-12-05 14:04:40.905460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:34.743 [2024-12-05 14:04:40.905509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:34.743 [2024-12-05 14:04:40.905515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:34.743 [2024-12-05 14:04:40.905521] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:34.743 [2024-12-05 14:04:40.905524] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:34.743 [2024-12-05 14:04:40.905527] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:34.743 [2024-12-05 14:04:40.905531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:34.743 [2024-12-05 14:04:40.913460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:34.743 [2024-12-05 14:04:40.913471] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:34.743 [2024-12-05 14:04:40.913481] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:34.743 [2024-12-05 14:04:40.913487] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:34.743 [2024-12-05 14:04:40.913492] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:34.743 [2024-12-05 14:04:40.913496] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:34.743 [2024-12-05 14:04:40.913499] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:34.743 [2024-12-05 14:04:40.913504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:34.743 [2024-12-05 14:04:40.921461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:34.743 [2024-12-05 14:04:40.921471] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:34.743 [2024-12-05 14:04:40.921477] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:14:34.743 [2024-12-05 14:04:40.921483] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:34.743 [2024-12-05 14:04:40.921486] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:34.743 [2024-12-05 14:04:40.921488] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:34.743 [2024-12-05 14:04:40.921493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:34.743 [2024-12-05 14:04:40.929459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:34.743 [2024-12-05 14:04:40.929468] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:34.743 [2024-12-05 14:04:40.929474] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:34.743 [2024-12-05 14:04:40.929479] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:14:34.743 [2024-12-05 14:04:40.929486] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:34.743 [2024-12-05 14:04:40.929490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:34.743 [2024-12-05 14:04:40.929494] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:34.743 [2024-12-05 14:04:40.929497] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:34.743 [2024-12-05 14:04:40.929501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:34.743 [2024-12-05 14:04:40.929505] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:34.743 [2024-12-05 14:04:40.929517] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:34.743 [2024-12-05 14:04:40.937459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:34.743 [2024-12-05 14:04:40.937469] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:34.743 [2024-12-05 14:04:40.945462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:34.743 [2024-12-05 14:04:40.945472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:34.743 [2024-12-05 14:04:40.953461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:14:34.743 [2024-12-05 14:04:40.953472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:34.743 [2024-12-05 14:04:40.961460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:34.743 [2024-12-05 14:04:40.961473] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:34.743 [2024-12-05 14:04:40.961476] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:34.743 [2024-12-05 14:04:40.961479] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:34.743 [2024-12-05 14:04:40.961481] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:34.743 [2024-12-05 14:04:40.961484] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:34.743 [2024-12-05 14:04:40.961488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:34.743 [2024-12-05 14:04:40.961494] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:34.743 [2024-12-05 14:04:40.961497] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:34.743 [2024-12-05 14:04:40.961499] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:34.743 [2024-12-05 14:04:40.961503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:34.743 [2024-12-05 14:04:40.961509] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:34.743 [2024-12-05 14:04:40.961512] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:34.743 [2024-12-05 14:04:40.961514] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:34.743 [2024-12-05 14:04:40.961520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:34.743 [2024-12-05 14:04:40.961526] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:34.743 [2024-12-05 14:04:40.961529] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:34.743 [2024-12-05 14:04:40.961531] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:34.743 [2024-12-05 14:04:40.961535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:34.743 [2024-12-05 14:04:40.969462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:34.743 [2024-12-05 14:04:40.969474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:34.743 [2024-12-05 14:04:40.969481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:34.743 
[2024-12-05 14:04:40.969486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:34.743 ===================================================== 00:14:34.743 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:34.743 ===================================================== 00:14:34.743 Controller Capabilities/Features 00:14:34.743 ================================ 00:14:34.744 Vendor ID: 4e58 00:14:34.744 Subsystem Vendor ID: 4e58 00:14:34.744 Serial Number: SPDK2 00:14:34.744 Model Number: SPDK bdev Controller 00:14:34.744 Firmware Version: 25.01 00:14:34.744 Recommended Arb Burst: 6 00:14:34.744 IEEE OUI Identifier: 8d 6b 50 00:14:34.744 Multi-path I/O 00:14:34.744 May have multiple subsystem ports: Yes 00:14:34.744 May have multiple controllers: Yes 00:14:34.744 Associated with SR-IOV VF: No 00:14:34.744 Max Data Transfer Size: 131072 00:14:34.744 Max Number of Namespaces: 32 00:14:34.744 Max Number of I/O Queues: 127 00:14:34.744 NVMe Specification Version (VS): 1.3 00:14:34.744 NVMe Specification Version (Identify): 1.3 00:14:34.744 Maximum Queue Entries: 256 00:14:34.744 Contiguous Queues Required: Yes 00:14:34.744 Arbitration Mechanisms Supported 00:14:34.744 Weighted Round Robin: Not Supported 00:14:34.744 Vendor Specific: Not Supported 00:14:34.744 Reset Timeout: 15000 ms 00:14:34.744 Doorbell Stride: 4 bytes 00:14:34.744 NVM Subsystem Reset: Not Supported 00:14:34.744 Command Sets Supported 00:14:34.744 NVM Command Set: Supported 00:14:34.744 Boot Partition: Not Supported 00:14:34.744 Memory Page Size Minimum: 4096 bytes 00:14:34.744 Memory Page Size Maximum: 4096 bytes 00:14:34.744 Persistent Memory Region: Not Supported 00:14:34.744 Optional Asynchronous Events Supported 00:14:34.744 Namespace Attribute Notices: Supported 00:14:34.744 Firmware Activation Notices: Not Supported 00:14:34.744 ANA Change Notices: Not Supported 00:14:34.744 PLE Aggregate Log Change Notices: Not Supported 00:14:34.744 LBA Status Info Alert Notices: Not Supported 00:14:34.744 EGE Aggregate Log Change Notices: Not Supported 00:14:34.744 Normal NVM Subsystem Shutdown event: Not Supported 00:14:34.744 Zone Descriptor Change Notices: Not Supported 00:14:34.744 Discovery Log Change Notices: Not Supported 00:14:34.744 Controller Attributes 00:14:34.744 128-bit Host Identifier: Supported 00:14:34.744 Non-Operational Permissive Mode: Not Supported 00:14:34.744 NVM Sets: Not Supported 00:14:34.744 Read Recovery Levels: Not Supported 00:14:34.744 Endurance Groups: Not Supported 00:14:34.744 Predictable Latency Mode: Not Supported 00:14:34.744 Traffic Based Keep Alive: Not Supported 00:14:34.744 Namespace Granularity: Not Supported 00:14:34.744 SQ Associations: Not Supported 00:14:34.744 UUID List: Not Supported 00:14:34.744 Multi-Domain Subsystem: Not Supported 00:14:34.744 Fixed Capacity Management: Not Supported 00:14:34.744 Variable Capacity Management: Not Supported 00:14:34.744 Delete Endurance Group: Not Supported 00:14:34.744 Delete NVM Set: Not Supported 00:14:34.744 Extended LBA Formats Supported: Not Supported 00:14:34.744 Flexible Data Placement Supported: Not Supported 00:14:34.744 00:14:34.744 Controller Memory Buffer Support 00:14:34.744 ================================ 00:14:34.744 Supported: No 00:14:34.744 00:14:34.744 Persistent Memory Region Support 00:14:34.744 ================================ 00:14:34.744 Supported: No 00:14:34.744 00:14:34.744 Admin Command Set Attributes
00:14:34.744 ============================ 00:14:34.744 Security Send/Receive: Not Supported 00:14:34.744 Format NVM: Not Supported 00:14:34.744 Firmware Activate/Download: Not Supported 00:14:34.744 Namespace Management: Not Supported 00:14:34.744 Device Self-Test: Not Supported 00:14:34.744 Directives: Not Supported 00:14:34.744 NVMe-MI: Not Supported 00:14:34.744 Virtualization Management: Not Supported 00:14:34.744 Doorbell Buffer Config: Not Supported 00:14:34.744 Get LBA Status Capability: Not Supported 00:14:34.744 Command & Feature Lockdown Capability: Not Supported 00:14:34.744 Abort Command Limit: 4 00:14:34.744 Async Event Request Limit: 4 00:14:34.744 Number of Firmware Slots: N/A 00:14:34.744 Firmware Slot 1 Read-Only: N/A 00:14:34.744 Firmware Activation Without Reset: N/A 00:14:34.744 Multiple Update Detection Support: N/A 00:14:34.744 Firmware Update Granularity: No Information Provided 00:14:34.744 Per-Namespace SMART Log: No 00:14:34.744 Asymmetric Namespace Access Log Page: Not Supported 00:14:34.744 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:34.744 Command Effects Log Page: Supported 00:14:34.744 Get Log Page Extended Data: Supported 00:14:34.744 Telemetry Log Pages: Not Supported 00:14:34.744 Persistent Event Log Pages: Not Supported 00:14:34.744 Supported Log Pages Log Page: May Support 00:14:34.744 Commands Supported & Effects Log Page: Not Supported 00:14:34.744 Feature Identifiers & Effects Log Page: May Support 00:14:34.744 NVMe-MI Commands & Effects Log Page: May Support 00:14:34.744 Data Area 4 for Telemetry Log: Not Supported 00:14:34.744 Error Log Page Entries Supported: 128 00:14:34.744 Keep Alive: Supported 00:14:34.744 Keep Alive Granularity: 10000 ms 00:14:34.744 00:14:34.744 NVM Command Set Attributes 00:14:34.744 ========================== 00:14:34.744 Submission Queue Entry Size 00:14:34.744 Max: 64 00:14:34.744 Min: 64 00:14:34.744 Completion Queue Entry Size 00:14:34.744 Max: 16 00:14:34.744 Min: 16 00:14:34.744 Number of Namespaces: 32 00:14:34.744 Compare Command: Supported 00:14:34.744 Write Uncorrectable Command: Not Supported 00:14:34.744 Dataset Management Command: Supported 00:14:34.744 Write Zeroes Command: Supported 00:14:34.744 Set Features Save Field: Not Supported 00:14:34.744 Reservations: Not Supported 00:14:34.744 Timestamp: Not Supported 00:14:34.744 Copy: Supported 00:14:34.744 Volatile Write Cache: Present 00:14:34.744 Atomic Write Unit (Normal): 1 00:14:34.744 Atomic Write Unit (PFail): 1 00:14:34.744 Atomic Compare & Write Unit: 1 00:14:34.744 Fused Compare & Write: Supported 00:14:34.744 Scatter-Gather List 00:14:34.744 SGL Command Set: Supported (Dword aligned) 00:14:34.744 SGL Keyed: Not Supported 00:14:34.744 SGL Bit Bucket Descriptor: Not Supported 00:14:34.744 SGL Metadata Pointer: Not Supported 00:14:34.744 Oversized SGL: Not Supported 00:14:34.744 SGL Metadata Address: Not Supported 00:14:34.744 SGL Offset: Not Supported 00:14:34.744 Transport SGL Data Block: Not Supported 00:14:34.744 Replay Protected Memory Block: Not Supported 00:14:34.744 00:14:34.744 Firmware Slot Information 00:14:34.744 ========================= 00:14:34.744 Active slot: 1 00:14:34.744 Slot 1 Firmware Revision: 25.01 00:14:34.744 00:14:34.744 00:14:34.744 Commands Supported and Effects 00:14:34.744 ============================== 00:14:34.744 Admin Commands 00:14:34.744 -------------- 00:14:34.744 Get Log Page (02h): Supported 00:14:34.744 Identify (06h): Supported 00:14:34.744 Abort (08h): Supported 00:14:34.744 Set Features (09h): Supported
00:14:34.744 Get Features (0Ah): Supported 00:14:34.744 Asynchronous Event Request (0Ch): Supported 00:14:34.744 Keep Alive (18h): Supported 00:14:34.744 I/O Commands 00:14:34.744 ------------ 00:14:34.744 Flush (00h): Supported LBA-Change 00:14:34.744 Write (01h): Supported LBA-Change 00:14:34.744 Read (02h): Supported 00:14:34.744 Compare (05h): Supported 00:14:34.744 Write Zeroes (08h): Supported LBA-Change 00:14:34.744 Dataset Management (09h): Supported LBA-Change 00:14:34.744 Copy (19h): Supported LBA-Change 00:14:34.744 00:14:34.744 Error Log 00:14:34.744 ========= 00:14:34.744 00:14:34.744 Arbitration 00:14:34.744 =========== 00:14:34.744 Arbitration Burst: 1 00:14:34.744 00:14:34.744 Power Management 00:14:34.744 ================ 00:14:34.744 Number of Power States: 1 00:14:34.744 Current Power State: Power State #0 00:14:34.744 Power State #0: 00:14:34.744 Max Power: 0.00 W 00:14:34.744 Non-Operational State: Operational 00:14:34.744 Entry Latency: Not Reported 00:14:34.744 Exit Latency: Not Reported 00:14:34.744 Relative Read Throughput: 0 00:14:34.744 Relative Read Latency: 0 00:14:34.744 Relative Write Throughput: 0 00:14:34.744 Relative Write Latency: 0 00:14:34.744 Idle Power: Not Reported 00:14:34.744 Active Power: Not Reported 00:14:34.744 Non-Operational Permissive Mode: Not Supported 00:14:34.744 00:14:34.744 Health Information 00:14:34.745 ================== 00:14:34.745 Critical Warnings: 00:14:34.745 Available Spare Space: OK 00:14:34.745 Temperature: OK 00:14:34.745 Device Reliability: OK 00:14:34.745 Read Only: No 00:14:34.745 Volatile Memory Backup: OK 00:14:34.745 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:34.745 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:34.745 Available Spare: 0% 00:14:34.745 Available Spare Threshold: 0% 00:14:34.745 Life Percentage Used: 0% 00:14:34.745 Data Units Read: 0 00:14:34.745 Data Units Written: 0 00:14:34.745 Host Read Commands: 0 00:14:34.745 Host Write Commands: 0 00:14:34.745 Controller Busy Time: 0 minutes 00:14:34.745 Power Cycles: 0 00:14:34.745 Power On Hours: 0 hours 00:14:34.745 Unsafe Shutdowns: 0 00:14:34.745 Unrecoverable Media Errors: 0 00:14:34.745 Lifetime Error Log Entries: 0 00:14:34.745 Warning Temperature Time: 0 minutes 00:14:34.745 Critical Temperature Time: 0 minutes 00:14:34.745 00:14:34.745 Number of Queues 00:14:34.745 ================ 00:14:34.745 Number of I/O Submission Queues: 127 00:14:34.745 Number of I/O Completion Queues: 127 00:14:34.745 00:14:34.745 Active Namespaces 00:14:34.745 ================= 00:14:34.745 Namespace ID:1 00:14:34.745 Error Recovery Timeout: Unlimited 00:14:34.745 Command Set Identifier: NVM (00h) 00:14:34.745 Deallocate: Supported 00:14:34.745 Deallocated/Unwritten Error: Not Supported 00:14:34.745 Deallocated Read Value: Unknown 00:14:34.745 Deallocate in Write Zeroes: Not Supported 00:14:34.745 Deallocated Guard Field: 0xFFFF 00:14:34.745 Flush: Supported 00:14:34.745 Reservation: Supported 00:14:34.745 Namespace Sharing Capabilities: Multiple Controllers 00:14:34.745 Size (in LBAs): 131072 (0GiB) 00:14:34.745 Capacity (in LBAs): 131072 (0GiB) 00:14:34.745 Utilization (in LBAs): 131072 (0GiB) 00:14:34.745 NGUID: 9D5A883830044176B1E76C2ADE0CF78C 00:14:34.745 UUID: 9d5a8838-3004-4176-b1e7-6c2ade0cf78c 00:14:34.745 Thin Provisioning: Not Supported 00:14:34.745 Per-NS Atomic Units: Yes 00:14:34.745 Atomic Boundary Size (Normal): 0 00:14:34.745 Atomic Boundary Size (PFail): 0 00:14:34.745 Atomic Boundary Offset: 0 00:14:34.745 Maximum Single Source Range Length: 65535 00:14:34.745 Maximum Copy Length: 65535 00:14:34.745 Maximum Source Range Count: 1 00:14:34.745 NGUID/EUI64 Never Reused: No 00:14:34.745 Namespace Write Protected: No 00:14:34.745 Number of LBA Formats: 1 00:14:34.745 Current LBA Format: LBA Format #00 00:14:34.745 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:34.745 00:14:34.745
00:14:34.745 [2024-12-05 14:04:40.969559] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:34.745 [2024-12-05 14:04:40.977461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:34.745 [2024-12-05 14:04:40.977485] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:34.745 [2024-12-05 14:04:40.977492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.745 [2024-12-05 14:04:40.977497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.745 [2024-12-05 14:04:40.977501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.745 [2024-12-05 14:04:40.977506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.745 [2024-12-05 14:04:40.977535] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:34.745 [2024-12-05 14:04:40.977543] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:34.745 [2024-12-05 14:04:40.978539] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:34.745 [2024-12-05 14:04:40.978576] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:34.745 [2024-12-05 14:04:40.978580] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:34.745 [2024-12-05 14:04:40.979547] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:34.745 [2024-12-05 14:04:40.979556] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:34.745 [2024-12-05 14:04:40.979599] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:34.745 [2024-12-05 14:04:40.980565] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:14:34.745 14:04:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:35.006 [2024-12-05 14:04:41.169858] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:40.291 Initializing NVMe Controllers 00:14:40.291
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:40.291 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:40.291 Initialization complete. Launching workers. 00:14:40.291 ======================================================== 00:14:40.291 Latency(us) 00:14:40.291 Device Information : IOPS MiB/s Average min max 00:14:40.291 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39966.04 156.12 3202.59 864.28 6779.61 00:14:40.291 ======================================================== 00:14:40.291 Total : 39966.04 156.12 3202.59 864.28 6779.61 00:14:40.291 00:14:40.291 [2024-12-05 14:04:46.275654] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:40.291 14:04:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:40.291 [2024-12-05 14:04:46.469222] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:45.572 Initializing NVMe Controllers 00:14:45.572 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:45.572 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:45.572 Initialization complete. Launching workers. 00:14:45.572 ======================================================== 00:14:45.572 Latency(us) 00:14:45.572 Device Information : IOPS MiB/s Average min max 00:14:45.572 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39986.15 156.20 3200.99 863.99 8703.35 00:14:45.572 ======================================================== 00:14:45.572 Total : 39986.15 156.20 3200.99 863.99 8703.35 00:14:45.572 00:14:45.572 [2024-12-05 14:04:51.485378] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:45.572 14:04:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:45.572 [2024-12-05 14:04:51.690560] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:50.859 [2024-12-05 14:04:56.824550] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:50.859 Initializing NVMe Controllers 00:14:50.859 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:50.859 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:50.859 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:50.859 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:50.859 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:50.859 Initialization complete. Launching workers. 
00:14:50.859 Starting thread on core 2 00:14:50.859 Starting thread on core 3 00:14:50.859 Starting thread on core 1 00:14:50.859 14:04:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:50.859 [2024-12-05 14:04:57.076831] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:54.202 [2024-12-05 14:05:00.162227] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:54.202 Initializing NVMe Controllers 00:14:54.202 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:54.202 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:54.202 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:54.202 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:54.202 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:54.202 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:54.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:54.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:54.202 Initialization complete. Launching workers. 00:14:54.202 Starting thread on core 1 with urgent priority queue 00:14:54.202 Starting thread on core 2 with urgent priority queue 00:14:54.202 Starting thread on core 3 with urgent priority queue 00:14:54.202 Starting thread on core 0 with urgent priority queue 00:14:54.202 SPDK bdev Controller (SPDK2 ) core 0: 15102.67 IO/s 6.62 secs/100000 ios 00:14:54.202 SPDK bdev Controller (SPDK2 ) core 1: 14537.67 IO/s 6.88 secs/100000 ios 00:14:54.202 SPDK bdev Controller (SPDK2 ) core 2: 7901.33 IO/s 12.66 secs/100000 ios 00:14:54.202 SPDK bdev Controller (SPDK2 ) core 3: 10521.33 IO/s 9.50 secs/100000 ios 00:14:54.202 ======================================================== 00:14:54.202 00:14:54.202 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:54.202 [2024-12-05 14:05:00.402834] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:54.202 Initializing NVMe Controllers 00:14:54.202 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:54.202 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:54.202 Namespace ID: 1 size: 0GB 00:14:54.202 Initialization complete. 00:14:54.202 INFO: using host memory buffer for IO 00:14:54.202 Hello world! 
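Every tool in this stretch — spdk_nvme_perf, reconnect, arbitration, hello_world — is pointed at the target the same way, through a transport-ID string rather than a PCI address. A hedged sketch that factors the string out, with flag glosses inferred from the command lines above rather than from the tools' help text:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
# -q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds,
# -c core mask; -s DPDK hugepage memory in MB and -g single DPDK memory
# segment are assumptions based on common SPDK app conventions
./build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
./build/examples/arbitration -t 3 -r "$TRID" -d 256 -g
./build/examples/hello_world -d 256 -g -r "$TRID"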
00:14:54.202 [2024-12-05 14:05:00.412905] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:54.203 14:05:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:54.464 [2024-12-05 14:05:00.651173] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:55.849 Initializing NVMe Controllers 00:14:55.849 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:55.849 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:55.849 Initialization complete. Launching workers. 00:14:55.849 submit (in ns) avg, min, max = 6377.1, 2815.8, 4995735.0 00:14:55.849 complete (in ns) avg, min, max = 15473.0, 1649.2, 4994523.3 00:14:55.849 00:14:55.849 Submit histogram 00:14:55.849 ================ 00:14:55.849 Range in us Cumulative Count 00:14:55.849 2.813 - 2.827: 0.3456% ( 70) 00:14:55.849 2.827 - 2.840: 1.2393% ( 181) 00:14:55.849 2.840 - 2.853: 3.7624% ( 511) 00:14:55.849 2.853 - 2.867: 8.0729% ( 873) 00:14:55.849 2.867 - 2.880: 13.6325% ( 1126) 00:14:55.849 2.880 - 2.893: 18.7923% ( 1045) 00:14:55.849 2.893 - 2.907: 23.7743% ( 1009) 00:14:55.849 2.907 - 2.920: 28.6525% ( 988) 00:14:55.849 2.920 - 2.933: 34.4640% ( 1177) 00:14:55.849 2.933 - 2.947: 39.6534% ( 1051) 00:14:55.849 2.947 - 2.960: 44.8674% ( 1056) 00:14:55.849 2.960 - 2.973: 49.6568% ( 970) 00:14:55.849 2.973 - 2.987: 56.7471% ( 1436) 00:14:55.849 2.987 - 3.000: 65.4866% ( 1770) 00:14:55.849 3.000 - 3.013: 75.1987% ( 1967) 00:14:55.849 3.013 - 3.027: 83.0741% ( 1595) 00:14:55.849 3.027 - 3.040: 89.1769% ( 1236) 00:14:55.849 3.040 - 3.053: 93.8133% ( 939) 00:14:55.849 3.053 - 3.067: 96.7017% ( 585) 00:14:55.849 3.067 - 3.080: 98.2324% ( 310) 00:14:55.849 3.080 - 3.093: 98.9829% ( 152) 00:14:55.849 3.093 - 3.107: 99.3186% ( 68) 00:14:55.849 3.107 - 3.120: 99.4272% ( 22) 00:14:55.849 3.120 - 3.133: 99.5062% ( 16) 00:14:55.849 3.133 - 3.147: 99.5359% ( 6) 00:14:55.849 3.147 - 3.160: 99.5556% ( 4) 00:14:55.849 3.160 - 3.173: 99.5606% ( 1) 00:14:55.849 3.253 - 3.267: 99.5655% ( 1) 00:14:55.849 3.320 - 3.333: 99.5704% ( 1) 00:14:55.849 3.547 - 3.573: 99.5754% ( 1) 00:14:55.849 3.760 - 3.787: 99.5852% ( 2) 00:14:55.849 4.107 - 4.133: 99.5902% ( 1) 00:14:55.849 4.133 - 4.160: 99.5951% ( 1) 00:14:55.849 4.213 - 4.240: 99.6001% ( 1) 00:14:55.849 4.267 - 4.293: 99.6050% ( 1) 00:14:55.849 4.480 - 4.507: 99.6099% ( 1) 00:14:55.849 4.507 - 4.533: 99.6198% ( 2) 00:14:55.849 4.587 - 4.613: 99.6247% ( 1) 00:14:55.849 4.613 - 4.640: 99.6297% ( 1) 00:14:55.849 4.640 - 4.667: 99.6346% ( 1) 00:14:55.849 4.693 - 4.720: 99.6494% ( 3) 00:14:55.849 4.747 - 4.773: 99.6544% ( 1) 00:14:55.849 4.773 - 4.800: 99.6593% ( 1) 00:14:55.849 4.827 - 4.853: 99.6642% ( 1) 00:14:55.849 4.853 - 4.880: 99.6692% ( 1) 00:14:55.849 4.880 - 4.907: 99.6791% ( 2) 00:14:55.849 4.933 - 4.960: 99.6889% ( 2) 00:14:55.849 4.987 - 5.013: 99.6939% ( 1) 00:14:55.849 5.013 - 5.040: 99.7037% ( 2) 00:14:55.849 5.040 - 5.067: 99.7087% ( 1) 00:14:55.849 5.067 - 5.093: 99.7136% ( 1) 00:14:55.849 5.093 - 5.120: 99.7186% ( 1) 00:14:55.849 5.120 - 5.147: 99.7235% ( 1) 00:14:55.849 5.147 - 5.173: 99.7284% ( 1) 00:14:55.849 5.200 - 5.227: 99.7383% ( 2) 00:14:55.849 5.280 - 5.307: 99.7482% ( 2) 00:14:55.849 5.413 - 5.440: 99.7531% ( 1) 00:14:55.849 5.440 - 5.467: 
99.7581% ( 1) 00:14:55.849 5.600 - 5.627: 99.7630% ( 1) 00:14:55.849 5.680 - 5.707: 99.7778% ( 3) 00:14:55.849 5.787 - 5.813: 99.7877% ( 2) 00:14:55.849 5.840 - 5.867: 99.7926% ( 1) 00:14:55.849 5.893 - 5.920: 99.7976% ( 1) 00:14:55.849 6.000 - 6.027: 99.8074% ( 2) 00:14:55.849 6.080 - 6.107: 99.8124% ( 1) 00:14:55.849 6.107 - 6.133: 99.8222% ( 2) 00:14:55.849 6.240 - 6.267: 99.8321% ( 2) 00:14:55.849 6.267 - 6.293: 99.8371% ( 1) 00:14:55.849 6.320 - 6.347: 99.8420% ( 1) 00:14:55.849 6.347 - 6.373: 99.8469% ( 1) 00:14:55.849 6.453 - 6.480: 99.8519% ( 1) 00:14:55.849 6.560 - 6.587: 99.8568% ( 1) 00:14:55.849 6.613 - 6.640: 99.8617% ( 1) 00:14:55.849 6.693 - 6.720: 99.8667% ( 1) 00:14:55.849 6.747 - 6.773: 99.8716% ( 1) 00:14:55.849 6.880 - 6.933: 99.8766% ( 1) 00:14:55.849 6.987 - 7.040: 99.8815% ( 1) 00:14:55.849 7.040 - 7.093: 99.8864% ( 1) 00:14:55.849 7.093 - 7.147: 99.8914% ( 1) 00:14:55.849 7.467 - 7.520: 99.8963% ( 1) 00:14:55.849 7.573 - 7.627: 99.9012% ( 1) 00:14:55.849 7.947 - 8.000: 99.9062% ( 1) 00:14:55.849 8.853 - 8.907: 99.9111% ( 1) 00:14:55.849 9.120 - 9.173: 99.9161% ( 1) 00:14:55.849 3986.773 - 4014.080: 99.9901% ( 15) 00:14:55.849 4068.693 - 4096.000: 99.9951% ( 1) 00:14:55.849 4969.813 - 4997.120: 100.0000% ( 1) 00:14:55.849
[2024-12-05 14:05:01.741960] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:55.849
00:14:55.849 Complete histogram 00:14:55.849 ================== 00:14:55.849 Range in us Cumulative Count 00:14:55.849 1.647 - 1.653: 0.1481% ( 30) 00:14:55.849 1.653 - 1.660: 0.9974% ( 172) 00:14:55.849 1.660 - 1.667: 1.0714% ( 15) 00:14:55.849 1.667 - 1.673: 1.2048% ( 27) 00:14:55.849 1.673 - 1.680: 1.3677% ( 33) 00:14:55.849 1.680 - 1.687: 1.4269% ( 12) 00:14:55.849 1.687 - 1.693: 1.4566% ( 6) 00:14:55.849 1.693 - 1.700: 1.4714% ( 3) 00:14:55.849 1.700 - 1.707: 33.3383% ( 6454) 00:14:55.849 1.707 - 1.720: 61.2798% ( 5659) 00:14:55.849 1.720 - 1.733: 77.8008% ( 3346) 00:14:55.849 1.733 - 1.747: 83.0247% ( 1058) 00:14:55.849 1.747 - 1.760: 84.4122% ( 281) 00:14:55.849 1.760 - 1.773: 88.1450% ( 756) 00:14:55.849 1.773 - 1.787: 93.3541% ( 1055) 00:14:55.849 1.787 - 1.800: 97.1412% ( 767) 00:14:55.849 1.800 - 1.813: 98.6471% ( 305) 00:14:55.849 1.813 - 1.827: 99.3680% ( 146) 00:14:55.849 1.827 - 1.840: 99.4914% ( 25) 00:14:55.849 1.840 - 1.853: 99.5211% ( 6) 00:14:55.849 1.853 - 1.867: 99.5260% ( 1) 00:14:55.849 2.000 - 2.013: 99.5309% ( 1) 00:14:55.849 3.253 - 3.267: 99.5359% ( 1) 00:14:55.849 3.440 - 3.467: 99.5408% ( 1) 00:14:55.849 3.493 - 3.520: 99.5457% ( 1) 00:14:55.849 3.547 - 3.573: 99.5507% ( 1) 00:14:55.849 3.573 - 3.600: 99.5556% ( 1) 00:14:55.849 3.920 - 3.947: 99.5606% ( 1) 00:14:55.850 4.053 - 4.080: 99.5655% ( 1) 00:14:55.850 4.240 - 4.267: 99.5704% ( 1) 00:14:55.850 4.347 - 4.373: 99.5852% ( 3) 00:14:55.850 4.400 - 4.427: 99.5902% ( 1) 00:14:55.850 4.613 - 4.640: 99.5951% ( 1) 00:14:55.850 4.800 - 4.827: 99.6050% ( 2) 00:14:55.850 4.853 - 4.880: 99.6099% ( 1) 00:14:55.850 5.093 - 5.120: 99.6149% ( 1) 00:14:55.850 5.387 - 5.413: 99.6198% ( 1) 00:14:55.850 5.413 - 5.440: 99.6247% ( 1) 00:14:55.850 5.627 - 5.653: 99.6297% ( 1) 00:14:55.850 5.707 - 5.733: 99.6346% ( 1) 00:14:55.850 7.840 - 7.893: 99.6396% ( 1) 00:14:55.850 10.187 - 10.240: 99.6445% ( 1) 00:14:55.850 11.520 - 11.573: 99.6494% ( 1) 00:14:55.850 34.133 - 34.347: 99.6544% ( 1) 00:14:55.850 2717.013 - 2730.667: 99.6593% ( 1) 00:14:55.850 3017.387 - 3031.040: 99.6642% ( 1) 00:14:55.850 3986.773 - 4014.080: 99.9951% ( 67) 00:14:55.850
4969.813 - 4997.120: 100.0000% ( 1) 00:14:55.850 00:14:55.850 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:55.850 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:55.850 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:55.850 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:55.850 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:55.850 [ 00:14:55.850 { 00:14:55.850 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:55.850 "subtype": "Discovery", 00:14:55.850 "listen_addresses": [], 00:14:55.850 "allow_any_host": true, 00:14:55.850 "hosts": [] 00:14:55.850 }, 00:14:55.850 { 00:14:55.850 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:55.850 "subtype": "NVMe", 00:14:55.850 "listen_addresses": [ 00:14:55.850 { 00:14:55.850 "trtype": "VFIOUSER", 00:14:55.850 "adrfam": "IPv4", 00:14:55.850 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:55.850 "trsvcid": "0" 00:14:55.850 } 00:14:55.850 ], 00:14:55.850 "allow_any_host": true, 00:14:55.850 "hosts": [], 00:14:55.850 "serial_number": "SPDK1", 00:14:55.850 "model_number": "SPDK bdev Controller", 00:14:55.850 "max_namespaces": 32, 00:14:55.850 "min_cntlid": 1, 00:14:55.850 "max_cntlid": 65519, 00:14:55.850 "namespaces": [ 00:14:55.850 { 00:14:55.850 "nsid": 1, 00:14:55.850 "bdev_name": "Malloc1", 00:14:55.850 "name": "Malloc1", 00:14:55.850 "nguid": "8CA1D1118D3948E6992AE77187A2BD16", 00:14:55.850 "uuid": "8ca1d111-8d39-48e6-992a-e77187a2bd16" 00:14:55.850 }, 00:14:55.850 { 00:14:55.850 "nsid": 2, 00:14:55.850 "bdev_name": "Malloc3", 00:14:55.850 "name": "Malloc3", 00:14:55.850 "nguid": "87F847611F704B7EA5A62E47666934A0", 00:14:55.850 "uuid": "87f84761-1f70-4b7e-a5a6-2e47666934a0" 00:14:55.850 } 00:14:55.850 ] 00:14:55.850 }, 00:14:55.850 { 00:14:55.850 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:55.850 "subtype": "NVMe", 00:14:55.850 "listen_addresses": [ 00:14:55.850 { 00:14:55.850 "trtype": "VFIOUSER", 00:14:55.850 "adrfam": "IPv4", 00:14:55.850 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:55.850 "trsvcid": "0" 00:14:55.850 } 00:14:55.850 ], 00:14:55.850 "allow_any_host": true, 00:14:55.850 "hosts": [], 00:14:55.850 "serial_number": "SPDK2", 00:14:55.850 "model_number": "SPDK bdev Controller", 00:14:55.850 "max_namespaces": 32, 00:14:55.850 "min_cntlid": 1, 00:14:55.850 "max_cntlid": 65519, 00:14:55.850 "namespaces": [ 00:14:55.850 { 00:14:55.850 "nsid": 1, 00:14:55.850 "bdev_name": "Malloc2", 00:14:55.850 "name": "Malloc2", 00:14:55.850 "nguid": "9D5A883830044176B1E76C2ADE0CF78C", 00:14:55.850 "uuid": "9d5a8838-3004-4176-b1e7-6c2ade0cf78c" 00:14:55.850 } 00:14:55.850 ] 00:14:55.850 } 00:14:55.850 ] 00:14:55.850 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:55.850 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2685650 00:14:55.850 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:55.850 14:05:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:55.850 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:55.850 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:55.850 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:55.850 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:55.850 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:55.850 14:05:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:55.850 [2024-12-05 14:05:02.126807] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:56.110 Malloc4 00:14:56.110 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:56.110 [2024-12-05 14:05:02.321115] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:56.110 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:56.110 Asynchronous Event Request test 00:14:56.110 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:56.110 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:56.110 Registering asynchronous event callbacks... 00:14:56.110 Starting namespace attribute notice tests for all controllers... 00:14:56.110 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:56.110 aer_cb - Changed Namespace 00:14:56.110 Cleaning up... 
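The "Changed Namespace" notice above is provoked deliberately: while the aer tool (-n 2 -t /tmp/aer_touch_file) waits, the harness hot-adds a namespace. A minimal sketch condensed from the RPCs visible in this log (rpc.py path relative to the spdk checkout):

./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
# hot-adding NSID 2 fires a namespace-attribute AEN (log page 4); the aer
# tool logs the callback and touches the file the harness is polling for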
00:14:56.371 [ 00:14:56.371 { 00:14:56.371 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:56.371 "subtype": "Discovery", 00:14:56.371 "listen_addresses": [], 00:14:56.371 "allow_any_host": true, 00:14:56.371 "hosts": [] 00:14:56.371 }, 00:14:56.371 { 00:14:56.371 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:56.371 "subtype": "NVMe", 00:14:56.371 "listen_addresses": [ 00:14:56.371 { 00:14:56.371 "trtype": "VFIOUSER", 00:14:56.371 "adrfam": "IPv4", 00:14:56.371 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:56.371 "trsvcid": "0" 00:14:56.371 } 00:14:56.371 ], 00:14:56.371 "allow_any_host": true, 00:14:56.371 "hosts": [], 00:14:56.371 "serial_number": "SPDK1", 00:14:56.371 "model_number": "SPDK bdev Controller", 00:14:56.371 "max_namespaces": 32, 00:14:56.371 "min_cntlid": 1, 00:14:56.371 "max_cntlid": 65519, 00:14:56.371 "namespaces": [ 00:14:56.371 { 00:14:56.371 "nsid": 1, 00:14:56.371 "bdev_name": "Malloc1", 00:14:56.371 "name": "Malloc1", 00:14:56.371 "nguid": "8CA1D1118D3948E6992AE77187A2BD16", 00:14:56.371 "uuid": "8ca1d111-8d39-48e6-992a-e77187a2bd16" 00:14:56.371 }, 00:14:56.371 { 00:14:56.371 "nsid": 2, 00:14:56.371 "bdev_name": "Malloc3", 00:14:56.371 "name": "Malloc3", 00:14:56.371 "nguid": "87F847611F704B7EA5A62E47666934A0", 00:14:56.371 "uuid": "87f84761-1f70-4b7e-a5a6-2e47666934a0" 00:14:56.371 } 00:14:56.371 ] 00:14:56.371 }, 00:14:56.371 { 00:14:56.371 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:56.371 "subtype": "NVMe", 00:14:56.371 "listen_addresses": [ 00:14:56.371 { 00:14:56.371 "trtype": "VFIOUSER", 00:14:56.371 "adrfam": "IPv4", 00:14:56.371 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:56.371 "trsvcid": "0" 00:14:56.371 } 00:14:56.371 ], 00:14:56.371 "allow_any_host": true, 00:14:56.371 "hosts": [], 00:14:56.371 "serial_number": "SPDK2", 00:14:56.371 "model_number": "SPDK bdev Controller", 00:14:56.371 "max_namespaces": 32, 00:14:56.371 "min_cntlid": 1, 00:14:56.371 "max_cntlid": 65519, 00:14:56.371 "namespaces": [ 00:14:56.371 { 00:14:56.371 "nsid": 1, 00:14:56.371 "bdev_name": "Malloc2", 00:14:56.371 "name": "Malloc2", 00:14:56.371 "nguid": "9D5A883830044176B1E76C2ADE0CF78C", 00:14:56.371 "uuid": "9d5a8838-3004-4176-b1e7-6c2ade0cf78c" 00:14:56.371 }, 00:14:56.371 { 00:14:56.371 "nsid": 2, 00:14:56.371 "bdev_name": "Malloc4", 00:14:56.371 "name": "Malloc4", 00:14:56.371 "nguid": "2760C25E6DB94AE8A1BF27B83AC4905D", 00:14:56.371 "uuid": "2760c25e-6db9-4ae8-a1bf-27b83ac4905d" 00:14:56.371 } 00:14:56.371 ] 00:14:56.371 } 00:14:56.371 ] 00:14:56.371 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2685650 00:14:56.371 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:56.371 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2676717 00:14:56.371 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2676717 ']' 00:14:56.371 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2676717 00:14:56.371 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:56.371 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:56.371 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2676717 00:14:56.371 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:56.371 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:56.371 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2676717' 00:14:56.371 killing process with pid 2676717 00:14:56.371 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2676717 00:14:56.371 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2676717 00:14:56.632 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:56.632 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:56.632 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:56.632 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:56.632 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:56.632 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2685824 00:14:56.632 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2685824' 00:14:56.632 Process pid: 2685824 00:14:56.632 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:56.632 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:56.632 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2685824 00:14:56.632 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2685824 ']' 00:14:56.632 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.632 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.632 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.632 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.632 14:05:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:56.632 [2024-12-05 14:05:02.803913] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:56.632 [2024-12-05 14:05:02.804848] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:14:56.632 [2024-12-05 14:05:02.804893] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.632 [2024-12-05 14:05:02.889300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:56.632 [2024-12-05 14:05:02.918632] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.632 [2024-12-05 14:05:02.918662] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.632 [2024-12-05 14:05:02.918668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.632 [2024-12-05 14:05:02.918672] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.632 [2024-12-05 14:05:02.918676] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.632 [2024-12-05 14:05:02.919912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.632 [2024-12-05 14:05:02.920070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.632 [2024-12-05 14:05:02.920217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.632 [2024-12-05 14:05:02.920220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:56.893 [2024-12-05 14:05:02.971552] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:56.893 [2024-12-05 14:05:02.972474] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:56.893 [2024-12-05 14:05:02.973448] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:56.893 [2024-12-05 14:05:02.973796] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:56.893 [2024-12-05 14:05:02.973842] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
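The xtrace that follows rebuilds the target in interrupt mode and wires up two vfio-user subsystems. Condensed into plain commands as a sketch — each command appears verbatim in the trace below, paths relative to the spdk checkout:

./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
./scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
    -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
# the same mkdir/bdev/subsystem/ns/listener sequence repeats for cnode2
# under /var/run/vfio-user/domain/vfio-user2/2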
00:14:57.465 14:05:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:57.465 14:05:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:57.465 14:05:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:58.406 14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:58.667 14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:58.667 14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:58.667 14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:58.667 14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:58.667 14:05:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:58.928 Malloc1 00:14:58.928 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:58.928 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:59.188 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:59.447 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:59.447 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:59.447 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:59.707 Malloc2 00:14:59.707 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:59.707 14:05:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:59.968 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:00.228 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:00.228 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2685824 00:15:00.228 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 2685824 ']' 00:15:00.228 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2685824 00:15:00.228 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:00.228 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.228 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2685824 00:15:00.228 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:00.228 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:00.228 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2685824' 00:15:00.228 killing process with pid 2685824 00:15:00.228 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2685824 00:15:00.228 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2685824 00:15:00.228 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:00.228 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:00.228 00:15:00.228 real 0m50.992s 00:15:00.228 user 3m15.448s 00:15:00.228 sys 0m2.708s 00:15:00.228 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.228 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:00.228 ************************************ 00:15:00.228 END TEST nvmf_vfio_user 00:15:00.228 ************************************ 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:00.489 ************************************ 00:15:00.489 START TEST nvmf_vfio_user_nvme_compliance 00:15:00.489 ************************************ 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:00.489 * Looking for test storage... 
00:15:00.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:00.489 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:00.750 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:00.750 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:00.750 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:00.750 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:00.750 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:00.750 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:00.750 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:00.750 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:00.750 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:00.750 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:00.750 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:00.750 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:00.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.751 --rc genhtml_branch_coverage=1 00:15:00.751 --rc genhtml_function_coverage=1 00:15:00.751 --rc genhtml_legend=1 00:15:00.751 --rc geninfo_all_blocks=1 00:15:00.751 --rc geninfo_unexecuted_blocks=1 00:15:00.751 00:15:00.751 ' 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:00.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.751 --rc genhtml_branch_coverage=1 00:15:00.751 --rc genhtml_function_coverage=1 00:15:00.751 --rc genhtml_legend=1 00:15:00.751 --rc geninfo_all_blocks=1 00:15:00.751 --rc geninfo_unexecuted_blocks=1 00:15:00.751 00:15:00.751 ' 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:00.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.751 --rc genhtml_branch_coverage=1 00:15:00.751 --rc genhtml_function_coverage=1 00:15:00.751 --rc genhtml_legend=1 00:15:00.751 --rc geninfo_all_blocks=1 00:15:00.751 --rc geninfo_unexecuted_blocks=1 00:15:00.751 00:15:00.751 ' 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:00.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.751 --rc genhtml_branch_coverage=1 00:15:00.751 --rc genhtml_function_coverage=1 00:15:00.751 --rc genhtml_legend=1 00:15:00.751 --rc geninfo_all_blocks=1 00:15:00.751 --rc 
geninfo_unexecuted_blocks=1 00:15:00.751 00:15:00.751 ' 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:00.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2686585 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2686585' 00:15:00.751 Process pid: 2686585 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2686585 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2686585 ']' 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:00.751 14:05:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:00.751 [2024-12-05 14:05:06.895110] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:15:00.751 [2024-12-05 14:05:06.895183] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.751 [2024-12-05 14:05:06.982279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:00.751 [2024-12-05 14:05:07.018712] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.751 [2024-12-05 14:05:07.018746] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.751 [2024-12-05 14:05:07.018752] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.752 [2024-12-05 14:05:07.018757] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.752 [2024-12-05 14:05:07.018761] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:00.752 [2024-12-05 14:05:07.019995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.752 [2024-12-05 14:05:07.020143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.752 [2024-12-05 14:05:07.020145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.694 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:01.694 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:01.694 14:05:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:02.637 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:02.637 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:02.637 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:02.637 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.637 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:02.637 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.637 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:02.637 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:02.637 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.638 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:02.638 malloc0 00:15:02.638 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.638 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:02.638 14:05:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.638 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:02.638 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.638 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:02.638 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.638 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:02.638 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.638 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:02.638 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.638 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:02.638 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.638 14:05:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:02.638 00:15:02.638 00:15:02.638 CUnit - A unit testing framework for C - Version 2.1-3 00:15:02.638 http://cunit.sourceforge.net/ 00:15:02.638 00:15:02.638 00:15:02.638 Suite: nvme_compliance 00:15:02.899 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-05 14:05:08.941808] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:02.899 [2024-12-05 14:05:08.943110] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:02.899 [2024-12-05 14:05:08.943122] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:02.899 [2024-12-05 14:05:08.943127] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:02.899 [2024-12-05 14:05:08.944824] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:02.899 passed 00:15:02.899 Test: admin_identify_ctrlr_verify_fused ...[2024-12-05 14:05:09.022333] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:02.899 [2024-12-05 14:05:09.025356] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:02.899 passed 00:15:02.899 Test: admin_identify_ns ...[2024-12-05 14:05:09.101938] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:02.900 [2024-12-05 14:05:09.165466] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:02.900 [2024-12-05 14:05:09.173463] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:02.900 [2024-12-05 14:05:09.194550] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:03.161 passed 00:15:03.161 Test: admin_get_features_mandatory_features ...[2024-12-05 14:05:09.266780] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.161 [2024-12-05 14:05:09.270806] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.161 passed 00:15:03.161 Test: admin_get_features_optional_features ...[2024-12-05 14:05:09.347287] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.161 [2024-12-05 14:05:09.350312] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.161 passed 00:15:03.161 Test: admin_set_features_number_of_queues ...[2024-12-05 14:05:09.426040] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.422 [2024-12-05 14:05:09.530537] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.422 passed 00:15:03.422 Test: admin_get_log_page_mandatory_logs ...[2024-12-05 14:05:09.603725] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.422 [2024-12-05 14:05:09.606750] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.422 passed 00:15:03.422 Test: admin_get_log_page_with_lpo ...[2024-12-05 14:05:09.682530] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.684 [2024-12-05 14:05:09.752465] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:03.684 [2024-12-05 14:05:09.765506] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.684 passed 00:15:03.684 Test: fabric_property_get ...[2024-12-05 14:05:09.838724] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.684 [2024-12-05 14:05:09.839922] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:03.684 [2024-12-05 14:05:09.841748] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.684 passed 00:15:03.684 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-05 14:05:09.921241] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.684 [2024-12-05 14:05:09.922431] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:03.684 [2024-12-05 14:05:09.924255] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.684 passed 00:15:03.944 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-05 14:05:09.996968] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.944 [2024-12-05 14:05:10.081465] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:03.944 [2024-12-05 14:05:10.097457] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:03.944 [2024-12-05 14:05:10.102550] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.944 passed 00:15:03.944 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-05 14:05:10.175798] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:03.944 [2024-12-05 14:05:10.177006] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:03.944 [2024-12-05 14:05:10.178810] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:03.944 passed 00:15:04.204 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-05 14:05:10.254524] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:04.204 [2024-12-05 14:05:10.332458] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:04.204 [2024-12-05 14:05:10.356460] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:04.204 [2024-12-05 14:05:10.361542] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:04.204 passed 00:15:04.204 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-05 14:05:10.434746] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:04.204 [2024-12-05 14:05:10.435946] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:04.204 [2024-12-05 14:05:10.435963] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:04.204 [2024-12-05 14:05:10.437768] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:04.204 passed 00:15:04.466 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-05 14:05:10.512814] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:04.466 [2024-12-05 14:05:10.608460] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:04.466 [2024-12-05 14:05:10.614462] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:04.466 [2024-12-05 14:05:10.623462] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:04.466 [2024-12-05 14:05:10.631459] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:04.466 [2024-12-05 14:05:10.660553] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:04.466 passed 00:15:04.466 Test: admin_create_io_sq_verify_pc ...[2024-12-05 14:05:10.734760] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:04.466 [2024-12-05 14:05:10.751466] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:04.726 [2024-12-05 14:05:10.768914] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:04.726 passed 00:15:04.726 Test: admin_create_io_qp_max_qps ...[2024-12-05 14:05:10.844392] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:05.665 [2024-12-05 14:05:11.957466] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:06.236 [2024-12-05 14:05:12.331755] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:06.236 passed 00:15:06.236 Test: admin_create_io_sq_shared_cq ...[2024-12-05 14:05:12.407554] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:06.495 [2024-12-05 14:05:12.541461] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:06.495 [2024-12-05 14:05:12.578503] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:06.495 passed 00:15:06.495 00:15:06.495 Run Summary: Type Total Ran Passed Failed Inactive 00:15:06.495 suites 1 1 n/a 0 0 00:15:06.495 tests 18 18 18 0 0 00:15:06.495 asserts 
360 360 360 0 n/a 00:15:06.495 00:15:06.495 Elapsed time = 1.496 seconds 00:15:06.495 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2686585 00:15:06.496 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2686585 ']' 00:15:06.496 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2686585 00:15:06.496 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:06.496 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:06.496 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2686585 00:15:06.496 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:06.496 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:06.496 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2686585' 00:15:06.496 killing process with pid 2686585 00:15:06.496 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2686585 00:15:06.496 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2686585 00:15:06.755 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:06.755 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:06.755 00:15:06.755 real 0m6.205s 00:15:06.755 user 0m17.578s 00:15:06.755 sys 0m0.568s 00:15:06.755 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:06.755 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:06.755 ************************************ 00:15:06.755 END TEST nvmf_vfio_user_nvme_compliance 00:15:06.755 ************************************ 00:15:06.755 14:05:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:06.755 14:05:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:06.755 14:05:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:06.755 14:05:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:06.755 ************************************ 00:15:06.755 START TEST nvmf_vfio_user_fuzz 00:15:06.755 ************************************ 00:15:06.755 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:06.755 * Looking for test storage... 
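Condensing the rpc_cmd xtrace above, the compliance run just completed brings up a vfio-user target in five RPCs and then points the test binary at the socket directory. In the sketch below, rpc.py stands in for the suite's rpc_cmd wrapper (both drive /var/tmp/spdk.sock); every command and argument is copied from the trace:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0
    $SPDK/test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'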
00:15:06.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:06.755 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:06.755 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:15:06.755 14:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:07.015 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:07.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.016 --rc genhtml_branch_coverage=1 00:15:07.016 --rc genhtml_function_coverage=1 00:15:07.016 --rc genhtml_legend=1 00:15:07.016 --rc geninfo_all_blocks=1 00:15:07.016 --rc geninfo_unexecuted_blocks=1 00:15:07.016 00:15:07.016 ' 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:07.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.016 --rc genhtml_branch_coverage=1 00:15:07.016 --rc genhtml_function_coverage=1 00:15:07.016 --rc genhtml_legend=1 00:15:07.016 --rc geninfo_all_blocks=1 00:15:07.016 --rc geninfo_unexecuted_blocks=1 00:15:07.016 00:15:07.016 ' 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:07.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.016 --rc genhtml_branch_coverage=1 00:15:07.016 --rc genhtml_function_coverage=1 00:15:07.016 --rc genhtml_legend=1 00:15:07.016 --rc geninfo_all_blocks=1 00:15:07.016 --rc geninfo_unexecuted_blocks=1 00:15:07.016 00:15:07.016 ' 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:07.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.016 --rc genhtml_branch_coverage=1 00:15:07.016 --rc genhtml_function_coverage=1 00:15:07.016 --rc genhtml_legend=1 00:15:07.016 --rc geninfo_all_blocks=1 00:15:07.016 --rc geninfo_unexecuted_blocks=1 00:15:07.016 00:15:07.016 ' 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:07.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2687980 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2687980' 00:15:07.016 Process pid: 2687980 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2687980 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2687980 ']' 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
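As in the compliance run, the target is started and the script blocks in waitforlisten until the RPC socket answers (rpc_addr=/var/tmp/spdk.sock and max_retries=100 in the trace). A minimal sketch of what such a helper does; this illustrates the pattern only and is not SPDK's actual implementation:

    # Wait until the target process is alive and its RPC socket exists.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do           # max_retries=100, as traced
            kill -0 "$pid" 2>/dev/null || return 1  # process exited early
            [ -S "$rpc_addr" ] && return 0          # socket is up
            sleep 0.5
        done
        return 1
    }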
00:15:07.016 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:07.017 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:07.957 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:07.957 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:07.957 14:05:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:08.898 14:05:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:08.898 14:05:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.898 14:05:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:08.898 14:05:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.898 14:05:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:08.898 14:05:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:08.898 14:05:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.898 14:05:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:08.898 malloc0 00:15:08.898 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.898 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:08.898 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.898 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:08.898 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.898 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:08.898 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.898 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:08.898 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.898 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:08.898 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.898 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:08.898 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.898 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
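The record below launches the NVMe fuzzer against the endpoint just configured. Condensed, the invocation is the single command shown here, with flags copied verbatim from the trace; the roughly 30-second gap between launch (14:05:15) and 'Fuzzing completed' (14:05:45) matches -t 30, and -S fixes the seed so a failing run can be replayed:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a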
00:15:08.898 14:05:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:41.009 Fuzzing completed. Shutting down the fuzz application 00:15:41.009 00:15:41.009 Dumping successful admin opcodes: 00:15:41.009 9, 10, 00:15:41.009 Dumping successful io opcodes: 00:15:41.009 0, 00:15:41.009 NS: 0x20000081ef00 I/O qp, Total commands completed: 1437227, total successful commands: 5632, random_seed: 1039773440 00:15:41.009 NS: 0x20000081ef00 admin qp, Total commands completed: 358176, total successful commands: 94, random_seed: 1168290560 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2687980 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2687980 ']' 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2687980 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2687980 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2687980' 00:15:41.009 killing process with pid 2687980 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2687980 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2687980 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:41.009 00:15:41.009 real 0m32.794s 00:15:41.009 user 0m38.012s 00:15:41.009 sys 0m24.513s 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:41.009 ************************************ 
00:15:41.009 END TEST nvmf_vfio_user_fuzz 00:15:41.009 ************************************ 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:41.009 ************************************ 00:15:41.009 START TEST nvmf_auth_target 00:15:41.009 ************************************ 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:41.009 * Looking for test storage... 00:15:41.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:41.009 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:41.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.010 --rc genhtml_branch_coverage=1 00:15:41.010 --rc genhtml_function_coverage=1 00:15:41.010 --rc genhtml_legend=1 00:15:41.010 --rc geninfo_all_blocks=1 00:15:41.010 --rc geninfo_unexecuted_blocks=1 00:15:41.010 00:15:41.010 ' 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:41.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.010 --rc genhtml_branch_coverage=1 00:15:41.010 --rc genhtml_function_coverage=1 00:15:41.010 --rc genhtml_legend=1 00:15:41.010 --rc geninfo_all_blocks=1 00:15:41.010 --rc geninfo_unexecuted_blocks=1 00:15:41.010 00:15:41.010 ' 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:41.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.010 --rc genhtml_branch_coverage=1 00:15:41.010 --rc genhtml_function_coverage=1 00:15:41.010 --rc genhtml_legend=1 00:15:41.010 --rc geninfo_all_blocks=1 00:15:41.010 --rc geninfo_unexecuted_blocks=1 00:15:41.010 00:15:41.010 ' 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:41.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.010 --rc genhtml_branch_coverage=1 00:15:41.010 --rc genhtml_function_coverage=1 00:15:41.010 --rc genhtml_legend=1 00:15:41.010 --rc geninfo_all_blocks=1 00:15:41.010 --rc geninfo_unexecuted_blocks=1 00:15:41.010 00:15:41.010 ' 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:41.010 14:05:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:41.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:41.010 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:41.011 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:41.011 14:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:47.602 
14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:47.602 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:47.602 14:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:47.602 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:47.602 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:47.602 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:47.602 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:47.603 14:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:47.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:47.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:15:47.603 00:15:47.603 --- 10.0.0.2 ping statistics --- 00:15:47.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.603 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:47.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:47.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:15:47.603 00:15:47.603 --- 10.0.0.1 ping statistics --- 00:15:47.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.603 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2697977 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2697977 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2697977 ']' 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
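
The block above stitched the test topology together from the two E810 ports: port cvl_0_0 moves into the freshly created namespace cvl_0_0_ns_spdk and becomes the target endpoint at 10.0.0.2, its sibling cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP/4420 on the initiator interface, and both directions are ping-verified before the target application starts. A minimal sketch of the same layout for a box without a spare NIC pair, assuming a veth pair in place of the physical ports (the names veth_init/veth_tgt are hypothetical; everything else mirrors the trace):

ip netns add cvl_0_0_ns_spdk                                   # target lives in its own netns
ip link add veth_init type veth peer name veth_tgt             # stand-in for cvl_0_1/cvl_0_0
ip link set veth_tgt netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev veth_init                          # initiator side, root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev veth_tgt   # target side
ip link set veth_init up
ip netns exec cvl_0_0_ns_spdk ip link set veth_tgt up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT # admit NVMe/TCP, as above
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
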
00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2697999 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=32e76016338f5f84223be3985ccf3c4847b74126ca1ee3a1 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.eLZ 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 32e76016338f5f84223be3985ccf3c4847b74126ca1ee3a1 0 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 32e76016338f5f84223be3985ccf3c4847b74126ca1ee3a1 0 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=32e76016338f5f84223be3985ccf3c4847b74126ca1ee3a1 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
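
gen_dhchap_key null 48 above draws len/2 random bytes with xxd, keeps them as a lowercase hex string, then feeds an inline python snippet to wrap that string in the DHHC-1 secret format. The wrapping itself is not echoed by xtrace, but the trace pins down most of it: the hex key 32e76016...e3a1 generated here resurfaces later in this log as DHHC-1:00:MzJl...Cdbh8w==:, i.e. base64 of the ASCII hex characters plus a 4-byte tail. A sketch of a compatible formatter, assuming the tail is the CRC-32 of the secret in little-endian byte order (the checksum detail is taken from the NVMe DH-HMAC-CHAP secret representation, not read out of this log):

format_dhchap_key() {  # sketch, not the verbatim nvmf/common.sh helper; arg order as traced: <hex-key> <digest-id>
  local key=$1 digest=$2   # digest id: 0=null, 1=sha256, 2=sha384, 3=sha512
  python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])  # the ASCII hex chars, not raw bytes
blob = key + zlib.crc32(key).to_bytes(4, "little")    # 4-byte CRC-32 tail (byte order assumed)
print(f"DHHC-1:{digest:02x}:{base64.b64encode(blob).decode()}:")
PYEOF
}
format_dhchap_key "$(xxd -p -c0 -l 24 /dev/urandom)" 0   # 48 hex chars, as gen_dhchap_key null 48 does
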
00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.eLZ 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.eLZ 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.eLZ 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9881ec3254003093ec0cac28aebdae8bfe5743215c8c96e9baae1134bf996ce1 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.97R 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9881ec3254003093ec0cac28aebdae8bfe5743215c8c96e9baae1134bf996ce1 3 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9881ec3254003093ec0cac28aebdae8bfe5743215c8c96e9baae1134bf996ce1 3 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9881ec3254003093ec0cac28aebdae8bfe5743215c8c96e9baae1134bf996ce1 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:47.603 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.97R 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.97R 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.97R 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
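
That reading can be spot-checked directly against this run: the key0 secret that appears further down in the trace decodes straight back to the hex string generated above, with four checksum bytes trailing it (base64 string copied verbatim from the nvme connect line later in this log):

echo 'MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==' \
  | base64 -d | head -c 48; echo
# prints 32e76016338f5f84223be3985ccf3c4847b74126ca1ee3a1 -- the key0 material from above
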
00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d6c8c1d453d9dbeedd164f962124a81c 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Kyn 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d6c8c1d453d9dbeedd164f962124a81c 1 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d6c8c1d453d9dbeedd164f962124a81c 1 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d6c8c1d453d9dbeedd164f962124a81c 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Kyn 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Kyn 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Kyn 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=432e6c0662cfd9874d10dad86bc72a14d58aa7d28a90dce5 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ssA 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 432e6c0662cfd9874d10dad86bc72a14d58aa7d28a90dce5 2 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 432e6c0662cfd9874d10dad86bc72a14d58aa7d28a90dce5 2 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:47.865 14:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=432e6c0662cfd9874d10dad86bc72a14d58aa7d28a90dce5 00:15:47.865 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:47.866 14:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ssA 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ssA 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.ssA 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=56601fa2fac2ad16e11296098e3edaa4fc8dcbfa06afd4d6 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.FTr 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 56601fa2fac2ad16e11296098e3edaa4fc8dcbfa06afd4d6 2 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 56601fa2fac2ad16e11296098e3edaa4fc8dcbfa06afd4d6 2 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=56601fa2fac2ad16e11296098e3edaa4fc8dcbfa06afd4d6 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.FTr 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.FTr 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.FTr 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b38ccc0bfaaf4fa15fe12f882b62a975 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.6Sq 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b38ccc0bfaaf4fa15fe12f882b62a975 1 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b38ccc0bfaaf4fa15fe12f882b62a975 1 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b38ccc0bfaaf4fa15fe12f882b62a975 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:47.866 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.6Sq 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.6Sq 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.6Sq 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8fa0c73eb5c3bdea4332ae5e111aaf8c9a3fdfa29825a52945bd77bab0b9a032 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Sd5 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 8fa0c73eb5c3bdea4332ae5e111aaf8c9a3fdfa29825a52945bd77bab0b9a032 3 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8fa0c73eb5c3bdea4332ae5e111aaf8c9a3fdfa29825a52945bd77bab0b9a032 3 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8fa0c73eb5c3bdea4332ae5e111aaf8c9a3fdfa29825a52945bd77bab0b9a032 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Sd5 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Sd5 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Sd5 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2697977 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2697977 ']' 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.126 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.387 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:48.387 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:48.387 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2697999 /var/tmp/host.sock 00:15:48.387 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2697999 ']' 00:15:48.387 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:48.387 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.387 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:48.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
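
With the target (nvmf_tgt, pid 2697977) and the host-side spdk_tgt both up, the loop that follows registers every generated key file twice, once per RPC server: rpc_cmd talks to the target over the default /var/tmp/spdk.sock, hostrpc to the host application over /var/tmp/host.sock. Condensed from the trace below, with the key file names from this run:

# target side: make key0/ckey0 visible to the nvmf subsystem
scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.eLZ
scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.97R
# host side: the initiator bdev will reference the same named keys
scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.eLZ
scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.97R
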
00:15:48.387 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.387 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.387 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:48.387 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:48.387 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:48.387 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.387 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.387 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.387 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:48.649 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.eLZ 00:15:48.649 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.649 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.649 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.649 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.eLZ 00:15:48.649 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.eLZ 00:15:48.649 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.97R ]] 00:15:48.649 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.97R 00:15:48.649 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.649 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.649 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.649 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.97R 00:15:48.649 14:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.97R 00:15:48.909 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:48.909 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Kyn 00:15:48.909 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.909 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.909 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.909 14:05:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Kyn 00:15:48.909 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Kyn 00:15:49.168 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.ssA ]] 00:15:49.168 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ssA 00:15:49.168 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.168 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.168 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.168 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ssA 00:15:49.168 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ssA 00:15:49.427 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:49.427 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.FTr 00:15:49.427 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.427 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.427 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.427 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.FTr 00:15:49.427 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.FTr 00:15:49.427 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.6Sq ]] 00:15:49.427 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6Sq 00:15:49.427 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.427 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.427 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.427 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6Sq 00:15:49.427 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6Sq 00:15:49.686 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:49.686 14:05:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Sd5 00:15:49.686 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.686 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.686 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.686 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Sd5 00:15:49.686 14:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Sd5 00:15:49.946 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:49.946 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:49.946 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:49.946 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.946 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:49.946 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:50.206 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:50.206 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.206 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:50.206 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:50.206 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:50.206 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.206 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.206 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.206 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.206 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.206 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.206 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.206 
14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.466 00:15:50.466 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.466 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.466 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.466 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.466 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.466 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.466 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.727 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.727 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.727 { 00:15:50.727 "cntlid": 1, 00:15:50.727 "qid": 0, 00:15:50.727 "state": "enabled", 00:15:50.727 "thread": "nvmf_tgt_poll_group_000", 00:15:50.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:15:50.727 "listen_address": { 00:15:50.727 "trtype": "TCP", 00:15:50.727 "adrfam": "IPv4", 00:15:50.727 "traddr": "10.0.0.2", 00:15:50.727 "trsvcid": "4420" 00:15:50.727 }, 00:15:50.727 "peer_address": { 00:15:50.727 "trtype": "TCP", 00:15:50.727 "adrfam": "IPv4", 00:15:50.727 "traddr": "10.0.0.1", 00:15:50.727 "trsvcid": "45640" 00:15:50.727 }, 00:15:50.727 "auth": { 00:15:50.727 "state": "completed", 00:15:50.727 "digest": "sha256", 00:15:50.727 "dhgroup": "null" 00:15:50.727 } 00:15:50.727 } 00:15:50.727 ]' 00:15:50.727 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.727 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:50.727 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.727 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:50.727 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.727 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.727 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.727 14:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.988 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:15:50.988 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:15:51.587 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.587 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:51.587 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.587 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.587 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.587 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.587 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:51.587 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:51.848 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:51.848 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.848 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:51.848 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:51.848 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:51.848 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.848 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.848 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.848 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.848 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.848 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.848 14:05:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.848 14:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.848 00:15:52.108 14:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.109 14:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.109 14:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.109 14:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.109 14:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.109 14:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.109 14:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.109 14:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.109 14:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.109 { 00:15:52.109 "cntlid": 3, 00:15:52.109 "qid": 0, 00:15:52.109 "state": "enabled", 00:15:52.109 "thread": "nvmf_tgt_poll_group_000", 00:15:52.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:15:52.109 "listen_address": { 00:15:52.109 "trtype": "TCP", 00:15:52.109 "adrfam": "IPv4", 00:15:52.109 "traddr": "10.0.0.2", 00:15:52.109 "trsvcid": "4420" 00:15:52.109 }, 00:15:52.109 "peer_address": { 00:15:52.109 "trtype": "TCP", 00:15:52.109 "adrfam": "IPv4", 00:15:52.109 "traddr": "10.0.0.1", 00:15:52.109 "trsvcid": "38426" 00:15:52.109 }, 00:15:52.109 "auth": { 00:15:52.109 "state": "completed", 00:15:52.109 "digest": "sha256", 00:15:52.109 "dhgroup": "null" 00:15:52.109 } 00:15:52.109 } 00:15:52.109 ]' 00:15:52.109 14:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.109 14:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.109 14:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.370 14:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:52.370 14:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.370 14:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.370 14:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.370 14:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.631 14:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:15:52.631 14:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:15:53.202 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.202 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:53.202 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.202 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.202 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.202 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.202 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:53.202 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:53.202 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:53.202 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.202 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:53.202 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:53.202 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:53.202 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.202 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.202 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.202 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.202 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.462 14:05:59 
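
Before the host entry is removed, each key pair is also pushed through the kernel NVMe/TCP initiator, as in the key1 leg just above. A hedged sketch of that leg follows; the DHHC-1 strings are copied verbatim from the trace (in this secret format, the two digits after "DHHC-1:" appear to name the hash the secret was generated for, with 00 denoting a plain untransformed secret), and -i/-l are read here as nvme-cli's --nr-io-queues and --ctrl-loss-tmo.

  HOST_KEY='DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv:'   # key1
  CTRL_KEY='DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==:'  # ckey1
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
      --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
      --dhchap-secret "$HOST_KEY" --dhchap-ctrl-secret "$CTRL_KEY"
  nvme disconnect -n "$SUBNQN"   # the trace confirms: "disconnected 1 controller(s)"
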
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.462 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.463 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.463 00:15:53.463 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.463 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.463 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.723 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.723 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.723 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.723 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.723 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.723 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.723 { 00:15:53.723 "cntlid": 5, 00:15:53.723 "qid": 0, 00:15:53.723 "state": "enabled", 00:15:53.723 "thread": "nvmf_tgt_poll_group_000", 00:15:53.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:15:53.723 "listen_address": { 00:15:53.723 "trtype": "TCP", 00:15:53.723 "adrfam": "IPv4", 00:15:53.723 "traddr": "10.0.0.2", 00:15:53.723 "trsvcid": "4420" 00:15:53.723 }, 00:15:53.723 "peer_address": { 00:15:53.723 "trtype": "TCP", 00:15:53.723 "adrfam": "IPv4", 00:15:53.723 "traddr": "10.0.0.1", 00:15:53.723 "trsvcid": "38458" 00:15:53.723 }, 00:15:53.723 "auth": { 00:15:53.723 "state": "completed", 00:15:53.723 "digest": "sha256", 00:15:53.724 "dhgroup": "null" 00:15:53.724 } 00:15:53.724 } 00:15:53.724 ]' 00:15:53.724 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.724 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:53.724 14:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.983 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:53.983 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.983 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.983 14:06:00 
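
The backslash-riddled comparisons in the trace ([[ sha256 == \s\h\a\2\5\6 ]] and friends) are only bash xtrace escaping the right-hand side so it matches literally rather than as a glob pattern. The verification step itself, as a hedged sketch of auth.sh lines 74-77 for a pass with the null DH group:

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]     # negotiated digest
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]       # negotiated DH group
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # authentication finished
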
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.983 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.983 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:15:53.983 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:15:54.584 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.584 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:54.584 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.584 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.584 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.584 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.584 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:54.584 14:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:54.872 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:54.872 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.872 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:54.872 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:54.872 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:54.872 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.872 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:15:54.872 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.872 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:54.872 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.872 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:54.872 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.872 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.133 00:15:55.133 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.133 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.133 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.395 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.395 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.395 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.395 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.395 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.395 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.395 { 00:15:55.395 "cntlid": 7, 00:15:55.395 "qid": 0, 00:15:55.395 "state": "enabled", 00:15:55.395 "thread": "nvmf_tgt_poll_group_000", 00:15:55.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:15:55.395 "listen_address": { 00:15:55.395 "trtype": "TCP", 00:15:55.395 "adrfam": "IPv4", 00:15:55.395 "traddr": "10.0.0.2", 00:15:55.395 "trsvcid": "4420" 00:15:55.395 }, 00:15:55.395 "peer_address": { 00:15:55.395 "trtype": "TCP", 00:15:55.395 "adrfam": "IPv4", 00:15:55.395 "traddr": "10.0.0.1", 00:15:55.395 "trsvcid": "38476" 00:15:55.395 }, 00:15:55.395 "auth": { 00:15:55.395 "state": "completed", 00:15:55.395 "digest": "sha256", 00:15:55.395 "dhgroup": "null" 00:15:55.395 } 00:15:55.395 } 00:15:55.395 ]' 00:15:55.395 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.395 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.395 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.395 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:55.395 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.395 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.395 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.395 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.656 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:15:55.656 14:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:15:56.226 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.226 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:56.226 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.226 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.226 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.226 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:56.226 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.226 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:56.226 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:56.486 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:56.486 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.486 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:56.486 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:56.486 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:56.486 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.486 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.486 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.486 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.486 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.486 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.486 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.486 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.747 00:15:56.747 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.747 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.747 14:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.747 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.747 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.747 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.747 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.747 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.747 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.747 { 00:15:56.747 "cntlid": 9, 00:15:56.747 "qid": 0, 00:15:56.747 "state": "enabled", 00:15:56.747 "thread": "nvmf_tgt_poll_group_000", 00:15:56.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:15:56.747 "listen_address": { 00:15:56.747 "trtype": "TCP", 00:15:56.747 "adrfam": "IPv4", 00:15:56.747 "traddr": "10.0.0.2", 00:15:56.747 "trsvcid": "4420" 00:15:56.747 }, 00:15:56.747 "peer_address": { 00:15:56.747 "trtype": "TCP", 00:15:56.747 "adrfam": "IPv4", 00:15:56.747 "traddr": "10.0.0.1", 00:15:56.747 "trsvcid": "38506" 00:15:56.747 }, 00:15:56.747 "auth": { 00:15:56.747 "state": "completed", 00:15:56.747 "digest": "sha256", 00:15:56.747 "dhgroup": "ffdhe2048" 00:15:56.747 } 00:15:56.747 } 00:15:56.747 ]' 00:15:56.747 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.007 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:57.007 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.007 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:15:57.007 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.007 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.007 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.007 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.267 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:15:57.267 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:15:57.839 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.839 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:57.839 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.839 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.839 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.839 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.840 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:57.840 14:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:57.840 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:57.840 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.840 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:57.840 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:57.840 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:57.840 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.840 14:06:04 
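
The for-loops visible at auth.sh lines 119 and 120 drive everything in this section: the full pass repeats for every (DH group, key index) combination, with sha256 the only digest visible in this excerpt (an outer digest loop presumably sits above it). The reconstructed shape, hedged:

  for dhgroup in "${dhgroups[@]}"; do    # auth.sh line 119: null, ffdhe2048, ffdhe3072, ...
      for keyid in "${!keys[@]}"; do     # auth.sh line 120: 0..3
          hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
              --dhchap-dhgroups "$dhgroup"                  # line 121
          connect_authenticate sha256 "$dhgroup" "$keyid"   # line 123
      done
  done
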
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.840 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.840 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.840 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.840 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.840 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.840 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.100 00:15:58.100 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.100 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.100 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.361 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.361 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.361 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.361 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.361 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.361 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.361 { 00:15:58.361 "cntlid": 11, 00:15:58.361 "qid": 0, 00:15:58.361 "state": "enabled", 00:15:58.361 "thread": "nvmf_tgt_poll_group_000", 00:15:58.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:15:58.361 "listen_address": { 00:15:58.361 "trtype": "TCP", 00:15:58.361 "adrfam": "IPv4", 00:15:58.361 "traddr": "10.0.0.2", 00:15:58.361 "trsvcid": "4420" 00:15:58.361 }, 00:15:58.361 "peer_address": { 00:15:58.361 "trtype": "TCP", 00:15:58.361 "adrfam": "IPv4", 00:15:58.361 "traddr": "10.0.0.1", 00:15:58.361 "trsvcid": "38520" 00:15:58.361 }, 00:15:58.361 "auth": { 00:15:58.361 "state": "completed", 00:15:58.361 "digest": "sha256", 00:15:58.361 "dhgroup": "ffdhe2048" 00:15:58.361 } 00:15:58.361 } 00:15:58.361 ]' 00:15:58.361 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.361 14:06:04 
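
connect_authenticate chooses between one-way and bidirectional authentication with the ${var:+word} expansion at auth.sh line 68, quoted verbatim below; this is why every key3 pass in this trace carries no --dhchap-ctrlr-key and no --dhchap-ctrl-secret: ckeys[3] is empty.

  ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})  # auth.sh line 68; $3 is the key index
  # ${ckeys[$3]:+word} expands to word only when ckeys[$3] is set and non-empty:
  #   ckeys[1] set    -> ckey=(--dhchap-ctrlr-key ckey1)   # target must also prove itself
  #   ckeys[3] empty  -> ckey=()                           # host-only authentication
  rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$3" "${ckey[@]}"
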
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:58.361 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.361 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:58.361 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.622 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.622 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.622 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.622 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:15:58.622 14:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:15:59.192 14:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.192 14:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:59.192 14:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.192 14:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.192 14:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.192 14:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.192 14:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:59.192 14:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:59.451 14:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:59.452 14:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.452 14:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:59.452 14:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:59.452 14:06:05 
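
Every pass ends with the same three-step teardown seen above, so the next (DH group, key) combination starts against a clean subsystem; summarized as a hedged sketch:

  hostrpc bdev_nvme_detach_controller nvme0                 # drop the SPDK-initiator session
  nvme disconnect -n "$SUBNQN"                              # drop the kernel session
  rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"   # revoke the host's DHCHAP entry
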
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:59.452 14:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.452 14:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.452 14:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.452 14:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.452 14:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.452 14:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.452 14:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.452 14:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.711 00:15:59.711 14:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.711 14:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.711 14:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.970 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.971 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.971 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.971 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.971 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.971 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.971 { 00:15:59.971 "cntlid": 13, 00:15:59.971 "qid": 0, 00:15:59.971 "state": "enabled", 00:15:59.971 "thread": "nvmf_tgt_poll_group_000", 00:15:59.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:15:59.971 "listen_address": { 00:15:59.971 "trtype": "TCP", 00:15:59.971 "adrfam": "IPv4", 00:15:59.971 "traddr": "10.0.0.2", 00:15:59.971 "trsvcid": "4420" 00:15:59.971 }, 00:15:59.971 "peer_address": { 00:15:59.971 "trtype": "TCP", 00:15:59.971 "adrfam": "IPv4", 00:15:59.971 "traddr": "10.0.0.1", 00:15:59.971 "trsvcid": "38552" 00:15:59.971 }, 00:15:59.971 "auth": { 00:15:59.971 "state": "completed", 00:15:59.971 "digest": 
"sha256", 00:15:59.971 "dhgroup": "ffdhe2048" 00:15:59.971 } 00:15:59.971 } 00:15:59.971 ]' 00:15:59.971 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.971 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:59.971 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.971 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:59.971 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.971 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.971 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.971 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.231 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:16:00.231 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:16:00.801 14:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.801 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:00.801 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.801 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.801 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.801 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.801 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:00.801 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:01.062 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:01.062 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.062 14:06:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:01.062 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:01.062 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:01.062 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.062 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:01.062 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.062 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.062 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.062 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:01.062 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.062 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.323 00:16:01.323 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.323 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.323 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.583 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.583 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.583 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.583 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.583 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.583 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.583 { 00:16:01.583 "cntlid": 15, 00:16:01.583 "qid": 0, 00:16:01.583 "state": "enabled", 00:16:01.583 "thread": "nvmf_tgt_poll_group_000", 00:16:01.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:01.584 "listen_address": { 00:16:01.584 "trtype": "TCP", 00:16:01.584 "adrfam": "IPv4", 00:16:01.584 "traddr": "10.0.0.2", 00:16:01.584 "trsvcid": "4420" 00:16:01.584 }, 00:16:01.584 "peer_address": { 00:16:01.584 "trtype": "TCP", 00:16:01.584 "adrfam": "IPv4", 00:16:01.584 "traddr": "10.0.0.1", 00:16:01.584 
"trsvcid": "36460" 00:16:01.584 }, 00:16:01.584 "auth": { 00:16:01.584 "state": "completed", 00:16:01.584 "digest": "sha256", 00:16:01.584 "dhgroup": "ffdhe2048" 00:16:01.584 } 00:16:01.584 } 00:16:01.584 ]' 00:16:01.584 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.584 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.584 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.584 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:01.584 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.584 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.584 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.584 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.844 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:16:01.844 14:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:16:02.416 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.416 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:02.416 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.416 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.416 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.416 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:02.416 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.416 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:02.416 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:02.676 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:02.677 14:06:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.677 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:02.677 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:02.677 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:02.677 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.677 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.677 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.677 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.677 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.677 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.677 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.677 14:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.937 00:16:02.937 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.937 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.937 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.937 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.937 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.937 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.937 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.198 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.198 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.198 { 00:16:03.198 "cntlid": 17, 00:16:03.198 "qid": 0, 00:16:03.198 "state": "enabled", 00:16:03.198 "thread": "nvmf_tgt_poll_group_000", 00:16:03.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:03.198 "listen_address": { 00:16:03.198 "trtype": "TCP", 00:16:03.198 "adrfam": "IPv4", 
00:16:03.198 "traddr": "10.0.0.2", 00:16:03.198 "trsvcid": "4420" 00:16:03.198 }, 00:16:03.198 "peer_address": { 00:16:03.198 "trtype": "TCP", 00:16:03.198 "adrfam": "IPv4", 00:16:03.198 "traddr": "10.0.0.1", 00:16:03.198 "trsvcid": "36472" 00:16:03.198 }, 00:16:03.198 "auth": { 00:16:03.198 "state": "completed", 00:16:03.198 "digest": "sha256", 00:16:03.198 "dhgroup": "ffdhe3072" 00:16:03.198 } 00:16:03.198 } 00:16:03.198 ]' 00:16:03.198 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.199 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.199 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.199 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:03.199 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.199 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.199 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.199 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.460 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:16:03.460 14:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:16:04.030 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.030 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:04.030 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.030 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.030 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.030 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.030 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:04.030 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:04.291 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:04.291 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.291 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:04.291 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:04.291 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:04.291 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.291 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.291 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.291 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.291 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.291 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.291 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.291 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.291 00:16:04.552 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.552 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.552 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.552 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.552 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.552 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.552 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.552 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.552 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.552 { 
00:16:04.552 "cntlid": 19, 00:16:04.552 "qid": 0, 00:16:04.552 "state": "enabled", 00:16:04.552 "thread": "nvmf_tgt_poll_group_000", 00:16:04.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:04.552 "listen_address": { 00:16:04.552 "trtype": "TCP", 00:16:04.552 "adrfam": "IPv4", 00:16:04.552 "traddr": "10.0.0.2", 00:16:04.552 "trsvcid": "4420" 00:16:04.553 }, 00:16:04.553 "peer_address": { 00:16:04.553 "trtype": "TCP", 00:16:04.553 "adrfam": "IPv4", 00:16:04.553 "traddr": "10.0.0.1", 00:16:04.553 "trsvcid": "36494" 00:16:04.553 }, 00:16:04.553 "auth": { 00:16:04.553 "state": "completed", 00:16:04.553 "digest": "sha256", 00:16:04.553 "dhgroup": "ffdhe3072" 00:16:04.553 } 00:16:04.553 } 00:16:04.553 ]' 00:16:04.553 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.813 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:04.813 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.813 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:04.813 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.813 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.813 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.813 14:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.073 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:16:05.073 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:16:05.644 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.644 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:05.644 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.644 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.644 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.644 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.644 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
00:16:05.644 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:05.644 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:05.644 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2
00:16:05.644 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:05.644 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:05.644 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:05.644 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:05.644 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:05.644 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:05.644 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:05.644 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:05.644 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:05.644 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:05.644 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:05.644 14:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:05.904
00:16:05.904 14:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:05.904 14:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:05.904 14:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:06.165 14:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:06.165 14:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:06.165 14:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:06.165 14:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
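
The round just traced shows how the two sides are paired: rpc_cmd (apparently the target's default RPC socket) authorizes the host NQN on the subsystem and names the DH-CHAP key to use, while hostrpc (the SPDK host stack behind /var/tmp/host.sock) attaches a controller presenting the same key names; keys such as key2/ckey2 were registered earlier in the test, outside this excerpt. A minimal sketch of that pairing, with values copied from the trace above:

  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  # Target side: allow the host and select its DH-CHAP key (a ctrlr key makes it bidirectional).
  scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # Host side: attach over TCP, authenticating with the matching keys.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
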
00:16:06.165 14:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:06.165 14:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:06.165 {
00:16:06.165 "cntlid": 21,
00:16:06.165 "qid": 0,
00:16:06.165 "state": "enabled",
00:16:06.165 "thread": "nvmf_tgt_poll_group_000",
00:16:06.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:16:06.165 "listen_address": {
00:16:06.165 "trtype": "TCP",
00:16:06.165 "adrfam": "IPv4",
00:16:06.165 "traddr": "10.0.0.2",
00:16:06.165 "trsvcid": "4420"
00:16:06.165 },
00:16:06.165 "peer_address": {
00:16:06.165 "trtype": "TCP",
00:16:06.165 "adrfam": "IPv4",
00:16:06.165 "traddr": "10.0.0.1",
00:16:06.165 "trsvcid": "36512"
00:16:06.165 },
00:16:06.165 "auth": {
00:16:06.165 "state": "completed",
00:16:06.165 "digest": "sha256",
00:16:06.165 "dhgroup": "ffdhe3072"
00:16:06.165 }
00:16:06.165 }
00:16:06.165 ]'
00:16:06.165 14:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:06.165 14:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:06.165 14:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:06.426 14:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:06.426 14:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:06.426 14:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:06.426 14:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:06.426 14:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:06.687 14:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ:
00:16:06.687 14:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ:
00:16:07.260 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:07.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:07.261 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:16:07.261 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:07.261 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
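
After the host-stack check, the same credentials are pushed through the kernel initiator with nvme-cli, secrets passed inline. The DHHC-1 strings follow the NVMe-oF secret representation: the prefix DHHC-1, a two-digit transform id (per the NVMe specification: 00 no transform, 01 SHA-256, 02 SHA-384, 03 SHA-512), the base64 key payload, and a trailing colon. Note also that the round starting below uses key3 with no --dhchap-ctrlr-key at all, exercising one-way (host-only) authentication; its ckey expansion at target/auth.sh@68 stays empty. The kernel-side step, copied from this trace:

  # Kernel-initiator check of the same credentials (values as logged above).
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
      --dhchap-secret 'DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==:' \
      --dhchap-ctrl-secret 'DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
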
00:16:07.261 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:07.261 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:07.261 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:07.261 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:16:07.261 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3
00:16:07.261 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:07.261 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:16:07.261 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:07.261 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:07.261 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:07.261 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:16:07.261 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:07.261 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:07.261 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:07.261 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:07.261 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:07.261 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:07.522
00:16:07.522 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:07.522 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:07.522 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:07.784 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:07.784 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:07.784 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:07.784 14:06:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.784 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.784 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.784 { 00:16:07.784 "cntlid": 23, 00:16:07.784 "qid": 0, 00:16:07.784 "state": "enabled", 00:16:07.784 "thread": "nvmf_tgt_poll_group_000", 00:16:07.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:07.784 "listen_address": { 00:16:07.784 "trtype": "TCP", 00:16:07.784 "adrfam": "IPv4", 00:16:07.784 "traddr": "10.0.0.2", 00:16:07.784 "trsvcid": "4420" 00:16:07.784 }, 00:16:07.784 "peer_address": { 00:16:07.784 "trtype": "TCP", 00:16:07.784 "adrfam": "IPv4", 00:16:07.784 "traddr": "10.0.0.1", 00:16:07.784 "trsvcid": "36536" 00:16:07.784 }, 00:16:07.784 "auth": { 00:16:07.784 "state": "completed", 00:16:07.784 "digest": "sha256", 00:16:07.784 "dhgroup": "ffdhe3072" 00:16:07.784 } 00:16:07.784 } 00:16:07.784 ]' 00:16:07.784 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.784 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.784 14:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.784 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:07.784 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.784 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.044 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.044 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.044 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:16:08.044 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:16:08.612 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.612 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:08.612 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.612 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.612 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:08.612 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:08.612 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.612 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:08.612 14:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:08.879 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:08.879 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.879 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:08.879 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:08.879 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:08.880 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.880 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.880 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.880 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.880 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.880 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.880 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.880 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.139 00:16:09.139 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.139 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.139 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.399 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.399 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.399 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.399 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.399 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.399 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.399 { 00:16:09.399 "cntlid": 25, 00:16:09.399 "qid": 0, 00:16:09.399 "state": "enabled", 00:16:09.399 "thread": "nvmf_tgt_poll_group_000", 00:16:09.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:09.399 "listen_address": { 00:16:09.399 "trtype": "TCP", 00:16:09.399 "adrfam": "IPv4", 00:16:09.399 "traddr": "10.0.0.2", 00:16:09.399 "trsvcid": "4420" 00:16:09.399 }, 00:16:09.399 "peer_address": { 00:16:09.399 "trtype": "TCP", 00:16:09.399 "adrfam": "IPv4", 00:16:09.399 "traddr": "10.0.0.1", 00:16:09.399 "trsvcid": "36546" 00:16:09.399 }, 00:16:09.399 "auth": { 00:16:09.399 "state": "completed", 00:16:09.399 "digest": "sha256", 00:16:09.399 "dhgroup": "ffdhe4096" 00:16:09.399 } 00:16:09.399 } 00:16:09.399 ]' 00:16:09.399 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.399 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.399 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.399 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:09.399 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.399 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.399 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.399 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.659 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:16:09.659 14:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:16:10.282 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.282 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:10.282 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.282 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.282 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.282 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.282 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:10.282 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:10.541 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:10.541 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.541 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:10.541 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:10.541 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:10.541 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.541 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.541 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.541 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.541 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.541 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.542 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.542 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.800 00:16:10.800 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.800 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.800 14:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.800 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.800 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.800 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.800 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.059 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.059 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.059 { 00:16:11.059 "cntlid": 27, 00:16:11.059 "qid": 0, 00:16:11.059 "state": "enabled", 00:16:11.059 "thread": "nvmf_tgt_poll_group_000", 00:16:11.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:11.059 "listen_address": { 00:16:11.059 "trtype": "TCP", 00:16:11.059 "adrfam": "IPv4", 00:16:11.059 "traddr": "10.0.0.2", 00:16:11.059 "trsvcid": "4420" 00:16:11.059 }, 00:16:11.059 "peer_address": { 00:16:11.059 "trtype": "TCP", 00:16:11.059 "adrfam": "IPv4", 00:16:11.059 "traddr": "10.0.0.1", 00:16:11.059 "trsvcid": "40438" 00:16:11.059 }, 00:16:11.059 "auth": { 00:16:11.059 "state": "completed", 00:16:11.059 "digest": "sha256", 00:16:11.059 "dhgroup": "ffdhe4096" 00:16:11.059 } 00:16:11.059 } 00:16:11.059 ]' 00:16:11.059 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.059 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:11.059 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.059 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:11.059 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.059 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.059 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.059 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.319 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:16:11.319 14:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:16:11.887 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:11.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.887 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:11.887 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.887 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.887 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.887 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.887 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:11.887 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:12.147 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:12.147 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.147 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:12.147 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:12.147 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:12.147 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.147 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.147 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.147 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.147 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.147 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.147 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.147 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.407 00:16:12.407 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
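
What follows is the verification half of each round: the host stack must report the controller under the expected name, and the target must report the queue pair as authenticated with exactly the digest and DH group under test. A compact sketch of those assertions (jq filters taken from the trace; this round is sha256/ffdhe4096):

  # Host saw the controller come up under the expected name.
  [[ "$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
  # Target reports the qpair as authenticated with the negotiated parameters.
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ "$(echo "$qpairs" | jq -r '.[0].auth.digest')" == sha256 ]]
  [[ "$(echo "$qpairs" | jq -r '.[0].auth.dhgroup')" == ffdhe4096 ]]
  [[ "$(echo "$qpairs" | jq -r '.[0].auth.state')" == completed ]]
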
00:16:12.407 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.407 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.407 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.407 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.407 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.407 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.407 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.407 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.407 { 00:16:12.407 "cntlid": 29, 00:16:12.407 "qid": 0, 00:16:12.407 "state": "enabled", 00:16:12.407 "thread": "nvmf_tgt_poll_group_000", 00:16:12.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:12.407 "listen_address": { 00:16:12.407 "trtype": "TCP", 00:16:12.407 "adrfam": "IPv4", 00:16:12.407 "traddr": "10.0.0.2", 00:16:12.408 "trsvcid": "4420" 00:16:12.408 }, 00:16:12.408 "peer_address": { 00:16:12.408 "trtype": "TCP", 00:16:12.408 "adrfam": "IPv4", 00:16:12.408 "traddr": "10.0.0.1", 00:16:12.408 "trsvcid": "40470" 00:16:12.408 }, 00:16:12.408 "auth": { 00:16:12.408 "state": "completed", 00:16:12.408 "digest": "sha256", 00:16:12.408 "dhgroup": "ffdhe4096" 00:16:12.408 } 00:16:12.408 } 00:16:12.408 ]' 00:16:12.408 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.668 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.668 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.668 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:12.668 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.668 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.668 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.668 14:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.927 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:16:12.927 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: 
--dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:16:13.498 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.498 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:13.498 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.498 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.498 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.498 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.498 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:13.498 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:13.759 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:13.759 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.759 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:13.759 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:13.759 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:13.759 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.759 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:13.759 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.759 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.759 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.759 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:13.759 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:13.759 14:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.021 00:16:14.021 14:06:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.021 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.021 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.021 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.021 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.021 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.021 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.021 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.021 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.021 { 00:16:14.021 "cntlid": 31, 00:16:14.021 "qid": 0, 00:16:14.021 "state": "enabled", 00:16:14.021 "thread": "nvmf_tgt_poll_group_000", 00:16:14.021 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:14.021 "listen_address": { 00:16:14.021 "trtype": "TCP", 00:16:14.021 "adrfam": "IPv4", 00:16:14.021 "traddr": "10.0.0.2", 00:16:14.021 "trsvcid": "4420" 00:16:14.021 }, 00:16:14.021 "peer_address": { 00:16:14.021 "trtype": "TCP", 00:16:14.021 "adrfam": "IPv4", 00:16:14.021 "traddr": "10.0.0.1", 00:16:14.021 "trsvcid": "40512" 00:16:14.021 }, 00:16:14.021 "auth": { 00:16:14.021 "state": "completed", 00:16:14.021 "digest": "sha256", 00:16:14.021 "dhgroup": "ffdhe4096" 00:16:14.021 } 00:16:14.021 } 00:16:14.021 ]' 00:16:14.021 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.282 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.282 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.282 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:14.282 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.282 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.282 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.282 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.542 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:16:14.542 14:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:16:15.112 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.113 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:15.113 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.113 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.113 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.113 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.113 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.113 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:15.113 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:15.372 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:15.372 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.372 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:15.372 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:15.372 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:15.372 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.372 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.372 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.372 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.372 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.372 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.372 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.372 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.632 00:16:15.632 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.632 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.632 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.892 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.892 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.892 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.892 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.892 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.892 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.892 { 00:16:15.892 "cntlid": 33, 00:16:15.892 "qid": 0, 00:16:15.892 "state": "enabled", 00:16:15.892 "thread": "nvmf_tgt_poll_group_000", 00:16:15.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:15.892 "listen_address": { 00:16:15.892 "trtype": "TCP", 00:16:15.892 "adrfam": "IPv4", 00:16:15.892 "traddr": "10.0.0.2", 00:16:15.892 "trsvcid": "4420" 00:16:15.892 }, 00:16:15.892 "peer_address": { 00:16:15.892 "trtype": "TCP", 00:16:15.892 "adrfam": "IPv4", 00:16:15.892 "traddr": "10.0.0.1", 00:16:15.892 "trsvcid": "40552" 00:16:15.892 }, 00:16:15.892 "auth": { 00:16:15.892 "state": "completed", 00:16:15.892 "digest": "sha256", 00:16:15.892 "dhgroup": "ffdhe6144" 00:16:15.892 } 00:16:15.892 } 00:16:15.892 ]' 00:16:15.892 14:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.892 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.892 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.892 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:15.892 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.892 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.892 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.892 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.153 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret 
DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:16:16.153 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:16:16.724 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.724 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:16.724 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.724 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.724 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.724 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.724 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:16.724 14:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:16.984 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:16.984 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.984 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:16.984 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:16.984 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:16.984 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.984 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.984 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.985 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.985 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.985 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.985 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.985 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.245 00:16:17.245 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.245 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.245 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.507 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.507 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.507 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.507 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.507 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.507 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.507 { 00:16:17.507 "cntlid": 35, 00:16:17.507 "qid": 0, 00:16:17.507 "state": "enabled", 00:16:17.507 "thread": "nvmf_tgt_poll_group_000", 00:16:17.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:17.507 "listen_address": { 00:16:17.507 "trtype": "TCP", 00:16:17.507 "adrfam": "IPv4", 00:16:17.507 "traddr": "10.0.0.2", 00:16:17.507 "trsvcid": "4420" 00:16:17.507 }, 00:16:17.507 "peer_address": { 00:16:17.507 "trtype": "TCP", 00:16:17.507 "adrfam": "IPv4", 00:16:17.507 "traddr": "10.0.0.1", 00:16:17.507 "trsvcid": "40572" 00:16:17.507 }, 00:16:17.507 "auth": { 00:16:17.507 "state": "completed", 00:16:17.507 "digest": "sha256", 00:16:17.507 "dhgroup": "ffdhe6144" 00:16:17.507 } 00:16:17.507 } 00:16:17.507 ]' 00:16:17.507 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.507 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.507 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.507 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:17.507 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.507 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.507 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.507 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.768 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:16:17.768 14:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:16:18.340 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.340 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:18.340 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.340 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.340 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.340 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.341 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:18.341 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:18.603 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:18.603 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.603 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:18.603 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:18.603 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:18.603 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.603 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.603 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.603 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.603 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.603 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.603 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.603 14:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.864 00:16:18.864 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.864 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.864 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.125 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.125 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.125 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.125 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.125 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.125 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.125 { 00:16:19.125 "cntlid": 37, 00:16:19.125 "qid": 0, 00:16:19.125 "state": "enabled", 00:16:19.125 "thread": "nvmf_tgt_poll_group_000", 00:16:19.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:19.125 "listen_address": { 00:16:19.125 "trtype": "TCP", 00:16:19.125 "adrfam": "IPv4", 00:16:19.125 "traddr": "10.0.0.2", 00:16:19.125 "trsvcid": "4420" 00:16:19.125 }, 00:16:19.125 "peer_address": { 00:16:19.125 "trtype": "TCP", 00:16:19.125 "adrfam": "IPv4", 00:16:19.125 "traddr": "10.0.0.1", 00:16:19.125 "trsvcid": "40590" 00:16:19.125 }, 00:16:19.125 "auth": { 00:16:19.125 "state": "completed", 00:16:19.125 "digest": "sha256", 00:16:19.125 "dhgroup": "ffdhe6144" 00:16:19.125 } 00:16:19.125 } 00:16:19.125 ]' 00:16:19.125 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.125 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.125 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.125 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:19.386 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.386 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.386 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:19.386 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.386 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:16:19.386 14:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:16:19.958 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.958 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:19.958 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.958 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.958 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.958 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.958 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:19.958 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:20.219 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:20.219 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.219 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:20.219 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:20.219 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:20.219 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.219 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:20.219 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.219 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.219 14:06:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.219 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:20.219 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.219 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.479 00:16:20.739 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.739 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.739 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.739 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.739 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.739 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.739 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.739 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.739 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.739 { 00:16:20.739 "cntlid": 39, 00:16:20.739 "qid": 0, 00:16:20.739 "state": "enabled", 00:16:20.739 "thread": "nvmf_tgt_poll_group_000", 00:16:20.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:20.739 "listen_address": { 00:16:20.739 "trtype": "TCP", 00:16:20.739 "adrfam": "IPv4", 00:16:20.739 "traddr": "10.0.0.2", 00:16:20.739 "trsvcid": "4420" 00:16:20.739 }, 00:16:20.739 "peer_address": { 00:16:20.739 "trtype": "TCP", 00:16:20.739 "adrfam": "IPv4", 00:16:20.739 "traddr": "10.0.0.1", 00:16:20.739 "trsvcid": "40630" 00:16:20.739 }, 00:16:20.739 "auth": { 00:16:20.739 "state": "completed", 00:16:20.739 "digest": "sha256", 00:16:20.739 "dhgroup": "ffdhe6144" 00:16:20.739 } 00:16:20.739 } 00:16:20.739 ]' 00:16:20.739 14:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.998 14:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:20.998 14:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.998 14:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:20.998 14:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.998 14:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:20.998 14:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.998 14:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.258 14:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:16:21.258 14:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:16:21.826 14:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.826 14:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:21.826 14:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.826 14:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.826 14:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.826 14:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.826 14:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.826 14:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:21.826 14:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:21.826 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:21.826 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.826 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:21.826 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:21.826 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:21.826 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.826 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.826 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
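Each block of trace above is one pass of the test's connect_authenticate helper: pin the host to a single digest/dhgroup combination via bdev_nvme_set_options, register the host NQN on the subsystem with the DH-HMAC-CHAP key pair, attach a controller through the host app (which is where the authentication transaction actually runs), assert the resulting qpair's auth parameters with jq, then detach. Reconstructed from the xtrace lines in this run, one pass looks roughly like the sketch below; the socket path, addresses, and NQNs are the ones visible in this log, and the helper body is an approximation inferred from the trace, not the verbatim script.

  # Sketch of one connect_authenticate pass, as suggested by the trace above.
  # Assumes keys key$id/ckey$id were loaded into keyrings earlier in the test.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

  # Restrict the host to one digest/dhgroup combination for this pass.
  "$rpc" -s "$hostsock" bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

  # Allow the host on the subsystem with the DH-HMAC-CHAP key pair under test
  # (target-side RPC, default socket).
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Attach a controller from the host app; DH-HMAC-CHAP runs during connect.
  "$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Verify the controller exists and the qpair completed authentication
  # with the expected parameters (mirrors the [[ ... == ... ]] checks above).
  "$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name'
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'

  # Tear down before the next digest/dhgroup/key combination.
  "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0

Two details worth noticing in the trace: passes for key3 carry no --dhchap-ctrlr-key, because the ${ckeys[$3]:+...} expansion visible at target/auth.sh@68 drops the controller key when no ckey3 was loaded, exercising unidirectional authentication; and the same key material is also pushed through the kernel path in each iteration via nvme connect with --dhchap-secret/--dhchap-ctrl-secret before the host is removed from the subsystem.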
00:16:21.826 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.085 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.085 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.085 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.085 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.384 00:16:22.384 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.384 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.384 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.700 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.700 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.700 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.700 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.700 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.700 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.700 { 00:16:22.700 "cntlid": 41, 00:16:22.700 "qid": 0, 00:16:22.700 "state": "enabled", 00:16:22.700 "thread": "nvmf_tgt_poll_group_000", 00:16:22.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:22.700 "listen_address": { 00:16:22.700 "trtype": "TCP", 00:16:22.700 "adrfam": "IPv4", 00:16:22.700 "traddr": "10.0.0.2", 00:16:22.700 "trsvcid": "4420" 00:16:22.700 }, 00:16:22.700 "peer_address": { 00:16:22.700 "trtype": "TCP", 00:16:22.700 "adrfam": "IPv4", 00:16:22.700 "traddr": "10.0.0.1", 00:16:22.700 "trsvcid": "58624" 00:16:22.700 }, 00:16:22.700 "auth": { 00:16:22.700 "state": "completed", 00:16:22.700 "digest": "sha256", 00:16:22.700 "dhgroup": "ffdhe8192" 00:16:22.700 } 00:16:22.700 } 00:16:22.700 ]' 00:16:22.700 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.700 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.700 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.701 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:22.701 14:06:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.701 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.701 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.701 14:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.961 14:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:16:22.961 14:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:16:23.532 14:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.532 14:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:23.532 14:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.532 14:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.532 14:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.532 14:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.532 14:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:23.532 14:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:23.791 14:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:23.791 14:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.792 14:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:23.792 14:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:23.792 14:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:23.792 14:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.792 14:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.792 14:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.792 14:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.792 14:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.792 14:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.792 14:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.792 14:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.361 00:16:24.361 14:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.361 14:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.361 14:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.361 14:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.361 14:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.361 14:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.361 14:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.361 14:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.361 14:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.361 { 00:16:24.361 "cntlid": 43, 00:16:24.361 "qid": 0, 00:16:24.361 "state": "enabled", 00:16:24.361 "thread": "nvmf_tgt_poll_group_000", 00:16:24.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:24.361 "listen_address": { 00:16:24.361 "trtype": "TCP", 00:16:24.361 "adrfam": "IPv4", 00:16:24.361 "traddr": "10.0.0.2", 00:16:24.361 "trsvcid": "4420" 00:16:24.361 }, 00:16:24.361 "peer_address": { 00:16:24.361 "trtype": "TCP", 00:16:24.361 "adrfam": "IPv4", 00:16:24.361 "traddr": "10.0.0.1", 00:16:24.361 "trsvcid": "58652" 00:16:24.361 }, 00:16:24.361 "auth": { 00:16:24.361 "state": "completed", 00:16:24.361 "digest": "sha256", 00:16:24.361 "dhgroup": "ffdhe8192" 00:16:24.361 } 00:16:24.361 } 00:16:24.361 ]' 00:16:24.361 14:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.361 14:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:24.361 14:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.622 14:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:24.622 14:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.622 14:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.622 14:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.622 14:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.622 14:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:16:24.622 14:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:16:25.563 14:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.563 14:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:25.563 14:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.563 14:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.563 14:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.563 14:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.563 14:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:25.563 14:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:25.563 14:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:25.563 14:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.563 14:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:25.563 14:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:25.563 14:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:25.563 14:06:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.563 14:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.563 14:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.563 14:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.563 14:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.563 14:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.563 14:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.563 14:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.133 00:16:26.133 14:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.133 14:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.133 14:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.133 14:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.133 14:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.133 14:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.133 14:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.133 14:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.133 14:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.133 { 00:16:26.133 "cntlid": 45, 00:16:26.133 "qid": 0, 00:16:26.133 "state": "enabled", 00:16:26.133 "thread": "nvmf_tgt_poll_group_000", 00:16:26.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:26.133 "listen_address": { 00:16:26.133 "trtype": "TCP", 00:16:26.133 "adrfam": "IPv4", 00:16:26.133 "traddr": "10.0.0.2", 00:16:26.133 "trsvcid": "4420" 00:16:26.133 }, 00:16:26.133 "peer_address": { 00:16:26.133 "trtype": "TCP", 00:16:26.133 "adrfam": "IPv4", 00:16:26.133 "traddr": "10.0.0.1", 00:16:26.133 "trsvcid": "58676" 00:16:26.133 }, 00:16:26.133 "auth": { 00:16:26.133 "state": "completed", 00:16:26.133 "digest": "sha256", 00:16:26.133 "dhgroup": "ffdhe8192" 00:16:26.133 } 00:16:26.133 } 00:16:26.133 ]' 00:16:26.133 
14:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.393 14:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.393 14:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.393 14:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:26.393 14:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.393 14:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.393 14:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.393 14:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.654 14:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:16:26.654 14:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:16:27.225 14:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.225 14:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:27.225 14:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.225 14:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.225 14:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.225 14:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.225 14:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:27.225 14:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:27.225 14:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:27.225 14:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.225 14:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:27.225 14:06:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:27.225 14:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:27.225 14:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.225 14:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:27.225 14:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.225 14:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.225 14:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.225 14:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:27.225 14:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.225 14:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.795 00:16:27.795 14:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.795 14:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.795 14:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.054 14:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.054 14:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.054 14:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.054 14:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.054 14:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.054 14:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.054 { 00:16:28.054 "cntlid": 47, 00:16:28.054 "qid": 0, 00:16:28.054 "state": "enabled", 00:16:28.054 "thread": "nvmf_tgt_poll_group_000", 00:16:28.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:28.054 "listen_address": { 00:16:28.054 "trtype": "TCP", 00:16:28.054 "adrfam": "IPv4", 00:16:28.054 "traddr": "10.0.0.2", 00:16:28.054 "trsvcid": "4420" 00:16:28.054 }, 00:16:28.054 "peer_address": { 00:16:28.054 "trtype": "TCP", 00:16:28.054 "adrfam": "IPv4", 00:16:28.054 "traddr": "10.0.0.1", 00:16:28.054 "trsvcid": "58700" 00:16:28.054 }, 00:16:28.054 "auth": { 00:16:28.054 "state": "completed", 00:16:28.054 
"digest": "sha256", 00:16:28.054 "dhgroup": "ffdhe8192" 00:16:28.054 } 00:16:28.054 } 00:16:28.054 ]' 00:16:28.054 14:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.054 14:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.054 14:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.054 14:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:28.054 14:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.054 14:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.054 14:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.054 14:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.314 14:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:16:28.314 14:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:16:28.886 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.886 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:28.886 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.886 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.886 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.886 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:28.886 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.886 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.886 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:28.886 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:29.146 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:29.146 14:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.146 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:29.146 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:29.146 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:29.146 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.146 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.146 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.146 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.146 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.146 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.146 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.146 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.406 00:16:29.406 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.406 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.406 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.665 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.665 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.665 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.665 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.665 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.666 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.666 { 00:16:29.666 "cntlid": 49, 00:16:29.666 "qid": 0, 00:16:29.666 "state": "enabled", 00:16:29.666 "thread": "nvmf_tgt_poll_group_000", 00:16:29.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:29.666 "listen_address": { 00:16:29.666 "trtype": "TCP", 00:16:29.666 "adrfam": "IPv4", 
00:16:29.666 "traddr": "10.0.0.2", 00:16:29.666 "trsvcid": "4420" 00:16:29.666 }, 00:16:29.666 "peer_address": { 00:16:29.666 "trtype": "TCP", 00:16:29.666 "adrfam": "IPv4", 00:16:29.666 "traddr": "10.0.0.1", 00:16:29.666 "trsvcid": "58734" 00:16:29.666 }, 00:16:29.666 "auth": { 00:16:29.666 "state": "completed", 00:16:29.666 "digest": "sha384", 00:16:29.666 "dhgroup": "null" 00:16:29.666 } 00:16:29.666 } 00:16:29.666 ]' 00:16:29.666 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.666 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:29.666 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.666 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:29.666 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.666 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.666 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.666 14:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.926 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:16:29.926 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:16:30.497 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.497 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:30.497 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.497 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.497 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.497 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.497 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:30.497 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:30.756 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:30.756 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.756 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:30.756 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:30.756 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:30.756 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.756 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.756 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.756 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.756 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.756 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.756 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.756 14:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.756 00:16:31.017 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.017 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.017 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.017 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.017 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.017 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.017 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.017 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.017 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.017 { 00:16:31.017 "cntlid": 51, 00:16:31.017 "qid": 0, 00:16:31.017 "state": "enabled", 
00:16:31.017 "thread": "nvmf_tgt_poll_group_000", 00:16:31.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:31.017 "listen_address": { 00:16:31.017 "trtype": "TCP", 00:16:31.017 "adrfam": "IPv4", 00:16:31.017 "traddr": "10.0.0.2", 00:16:31.017 "trsvcid": "4420" 00:16:31.017 }, 00:16:31.017 "peer_address": { 00:16:31.017 "trtype": "TCP", 00:16:31.017 "adrfam": "IPv4", 00:16:31.017 "traddr": "10.0.0.1", 00:16:31.017 "trsvcid": "38060" 00:16:31.017 }, 00:16:31.017 "auth": { 00:16:31.017 "state": "completed", 00:16:31.017 "digest": "sha384", 00:16:31.017 "dhgroup": "null" 00:16:31.017 } 00:16:31.017 } 00:16:31.017 ]' 00:16:31.017 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.276 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.276 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.276 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:31.276 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.276 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.276 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.276 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.535 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:16:31.535 14:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:16:32.103 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.103 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:32.103 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.103 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.103 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.103 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.103 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:16:32.103 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:32.103 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:32.103 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.103 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:32.103 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:32.103 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:32.103 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.103 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.103 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.103 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.103 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.103 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.103 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.104 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.363 00:16:32.363 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.363 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.364 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.623 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.623 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.623 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.623 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.623 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.623 14:06:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.623 { 00:16:32.623 "cntlid": 53, 00:16:32.623 "qid": 0, 00:16:32.623 "state": "enabled", 00:16:32.623 "thread": "nvmf_tgt_poll_group_000", 00:16:32.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:32.623 "listen_address": { 00:16:32.623 "trtype": "TCP", 00:16:32.623 "adrfam": "IPv4", 00:16:32.623 "traddr": "10.0.0.2", 00:16:32.623 "trsvcid": "4420" 00:16:32.623 }, 00:16:32.623 "peer_address": { 00:16:32.623 "trtype": "TCP", 00:16:32.623 "adrfam": "IPv4", 00:16:32.623 "traddr": "10.0.0.1", 00:16:32.623 "trsvcid": "38098" 00:16:32.623 }, 00:16:32.623 "auth": { 00:16:32.623 "state": "completed", 00:16:32.623 "digest": "sha384", 00:16:32.623 "dhgroup": "null" 00:16:32.623 } 00:16:32.623 } 00:16:32.623 ]' 00:16:32.623 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.623 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:32.623 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.624 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:32.624 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.884 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.884 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.884 14:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.884 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:16:32.884 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:16:33.451 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.451 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:33.451 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.451 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.451 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.451 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:16:33.451 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:33.451 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:33.710 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:33.710 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.710 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:33.710 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:33.710 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:33.710 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.710 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:33.710 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.710 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.710 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.710 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:33.710 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.710 14:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.969 00:16:33.969 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.969 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.969 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.228 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.228 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.228 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.228 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.228 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.228 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.228 { 00:16:34.228 "cntlid": 55, 00:16:34.228 "qid": 0, 00:16:34.228 "state": "enabled", 00:16:34.228 "thread": "nvmf_tgt_poll_group_000", 00:16:34.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:34.228 "listen_address": { 00:16:34.228 "trtype": "TCP", 00:16:34.228 "adrfam": "IPv4", 00:16:34.228 "traddr": "10.0.0.2", 00:16:34.228 "trsvcid": "4420" 00:16:34.228 }, 00:16:34.228 "peer_address": { 00:16:34.228 "trtype": "TCP", 00:16:34.228 "adrfam": "IPv4", 00:16:34.228 "traddr": "10.0.0.1", 00:16:34.228 "trsvcid": "38124" 00:16:34.228 }, 00:16:34.228 "auth": { 00:16:34.228 "state": "completed", 00:16:34.228 "digest": "sha384", 00:16:34.228 "dhgroup": "null" 00:16:34.228 } 00:16:34.228 } 00:16:34.228 ]' 00:16:34.228 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.228 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.228 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.228 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:34.228 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.228 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.228 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.228 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.489 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:16:34.489 14:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:16:35.060 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.060 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:35.060 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.060 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.060 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.060 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.060 14:06:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.060 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:35.060 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:35.319 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:35.319 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.319 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:35.319 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:35.319 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:35.319 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.319 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.319 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.319 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.319 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.319 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.319 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.319 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.578 00:16:35.578 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.578 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.578 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.836 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.836 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.837 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:35.837 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.837 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.837 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.837 { 00:16:35.837 "cntlid": 57, 00:16:35.837 "qid": 0, 00:16:35.837 "state": "enabled", 00:16:35.837 "thread": "nvmf_tgt_poll_group_000", 00:16:35.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:35.837 "listen_address": { 00:16:35.837 "trtype": "TCP", 00:16:35.837 "adrfam": "IPv4", 00:16:35.837 "traddr": "10.0.0.2", 00:16:35.837 "trsvcid": "4420" 00:16:35.837 }, 00:16:35.837 "peer_address": { 00:16:35.837 "trtype": "TCP", 00:16:35.837 "adrfam": "IPv4", 00:16:35.837 "traddr": "10.0.0.1", 00:16:35.837 "trsvcid": "38158" 00:16:35.837 }, 00:16:35.837 "auth": { 00:16:35.837 "state": "completed", 00:16:35.837 "digest": "sha384", 00:16:35.837 "dhgroup": "ffdhe2048" 00:16:35.837 } 00:16:35.837 } 00:16:35.837 ]' 00:16:35.837 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.837 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:35.837 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.837 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:35.837 14:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.837 14:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.837 14:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.837 14:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.096 14:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:16:36.096 14:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:16:36.666 14:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.666 14:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:36.666 14:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.666 14:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.666 14:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.666 14:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.666 14:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:36.666 14:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:36.927 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:36.927 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.927 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:36.927 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:36.927 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:36.927 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.927 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.927 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.927 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.927 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.927 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.927 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.927 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.187 00:16:37.187 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.187 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.187 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.187 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.187 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.187 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.187 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.187 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.187 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.187 { 00:16:37.187 "cntlid": 59, 00:16:37.187 "qid": 0, 00:16:37.187 "state": "enabled", 00:16:37.187 "thread": "nvmf_tgt_poll_group_000", 00:16:37.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:37.187 "listen_address": { 00:16:37.187 "trtype": "TCP", 00:16:37.187 "adrfam": "IPv4", 00:16:37.187 "traddr": "10.0.0.2", 00:16:37.187 "trsvcid": "4420" 00:16:37.187 }, 00:16:37.187 "peer_address": { 00:16:37.187 "trtype": "TCP", 00:16:37.187 "adrfam": "IPv4", 00:16:37.187 "traddr": "10.0.0.1", 00:16:37.187 "trsvcid": "38184" 00:16:37.187 }, 00:16:37.187 "auth": { 00:16:37.187 "state": "completed", 00:16:37.187 "digest": "sha384", 00:16:37.187 "dhgroup": "ffdhe2048" 00:16:37.187 } 00:16:37.187 } 00:16:37.187 ]' 00:16:37.187 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.447 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.447 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.447 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:37.447 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.447 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.447 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.447 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.706 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:16:37.706 14:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:16:38.277 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.277 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:38.277 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.277 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.277 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.277 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.277 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:38.277 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:38.538 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:38.538 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.538 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:38.538 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:38.538 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:38.538 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.538 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.538 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.538 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.538 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.538 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.538 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.538 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.538 00:16:38.538 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.538 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.538 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.798 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.798 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.798 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.798 14:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.798 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.798 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.798 { 00:16:38.798 "cntlid": 61, 00:16:38.798 "qid": 0, 00:16:38.798 "state": "enabled", 00:16:38.798 "thread": "nvmf_tgt_poll_group_000", 00:16:38.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:38.798 "listen_address": { 00:16:38.798 "trtype": "TCP", 00:16:38.798 "adrfam": "IPv4", 00:16:38.798 "traddr": "10.0.0.2", 00:16:38.798 "trsvcid": "4420" 00:16:38.798 }, 00:16:38.798 "peer_address": { 00:16:38.798 "trtype": "TCP", 00:16:38.798 "adrfam": "IPv4", 00:16:38.798 "traddr": "10.0.0.1", 00:16:38.798 "trsvcid": "38198" 00:16:38.798 }, 00:16:38.798 "auth": { 00:16:38.798 "state": "completed", 00:16:38.798 "digest": "sha384", 00:16:38.798 "dhgroup": "ffdhe2048" 00:16:38.798 } 00:16:38.799 } 00:16:38.799 ]' 00:16:38.799 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.799 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:38.799 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.059 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:39.059 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.059 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.059 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.059 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.059 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:16:39.059 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:16:39.998 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.998 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.998 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:39.998 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.998 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.998 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.998 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.998 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:39.998 14:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:39.998 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:39.998 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.998 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:39.998 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:39.998 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:39.998 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.998 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:39.998 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.998 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.998 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.999 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:39.999 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.999 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.258 00:16:40.258 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.258 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.258 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.258 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.258 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.258 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.258 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.258 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.258 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.258 { 00:16:40.258 "cntlid": 63, 00:16:40.258 "qid": 0, 00:16:40.258 "state": "enabled", 00:16:40.258 "thread": "nvmf_tgt_poll_group_000", 00:16:40.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:40.258 "listen_address": { 00:16:40.258 "trtype": "TCP", 00:16:40.258 "adrfam": "IPv4", 00:16:40.258 "traddr": "10.0.0.2", 00:16:40.258 "trsvcid": "4420" 00:16:40.258 }, 00:16:40.258 "peer_address": { 00:16:40.258 "trtype": "TCP", 00:16:40.258 "adrfam": "IPv4", 00:16:40.258 "traddr": "10.0.0.1", 00:16:40.258 "trsvcid": "38216" 00:16:40.258 }, 00:16:40.258 "auth": { 00:16:40.258 "state": "completed", 00:16:40.258 "digest": "sha384", 00:16:40.258 "dhgroup": "ffdhe2048" 00:16:40.258 } 00:16:40.258 } 00:16:40.258 ]' 00:16:40.258 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.518 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.518 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.518 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:40.518 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.518 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.519 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.519 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.779 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:16:40.779 14:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:16:41.350 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:41.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.350 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:41.350 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.350 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.350 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.350 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.350 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.350 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:41.350 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:41.610 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:41.611 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.611 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:41.611 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:41.611 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:41.611 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.611 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.611 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.611 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.611 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.611 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.611 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.611 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.611 
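Each iteration passes or fails on the JSON printed by nvmf_subsystem_get_qpairs: the single qpair must reach auth.state == completed with exactly the digest and dhgroup that were just configured. A minimal standalone equivalent of the @73-@77 assertions (NQNs and the host socket path are copied from the log; the target-side RPC is assumed to use the default /var/tmp/spdk.sock socket, so this is a sketch rather than the script's exact code):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # The attached controller must be visible on the host side (auth.sh@73).
  [[ $("$rpc_py" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # The target must report a qpair that completed DH-HMAC-CHAP (auth.sh@74-@77).
  qpairs=$("$rpc_py" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]   # null/ffdhe3072 on other passes
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]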
00:16:41.870 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.870 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.870 14:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.870 14:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.870 14:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.870 14:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.870 14:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.870 14:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.870 14:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.870 { 00:16:41.870 "cntlid": 65, 00:16:41.870 "qid": 0, 00:16:41.870 "state": "enabled", 00:16:41.870 "thread": "nvmf_tgt_poll_group_000", 00:16:41.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:41.870 "listen_address": { 00:16:41.870 "trtype": "TCP", 00:16:41.870 "adrfam": "IPv4", 00:16:41.870 "traddr": "10.0.0.2", 00:16:41.870 "trsvcid": "4420" 00:16:41.870 }, 00:16:41.870 "peer_address": { 00:16:41.870 "trtype": "TCP", 00:16:41.870 "adrfam": "IPv4", 00:16:41.870 "traddr": "10.0.0.1", 00:16:41.870 "trsvcid": "52442" 00:16:41.870 }, 00:16:41.870 "auth": { 00:16:41.870 "state": "completed", 00:16:41.870 "digest": "sha384", 00:16:41.870 "dhgroup": "ffdhe3072" 00:16:41.870 } 00:16:41.870 } 00:16:41.870 ]' 00:16:41.870 14:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.870 14:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.129 14:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.129 14:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:42.129 14:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.129 14:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.129 14:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.129 14:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.388 14:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:16:42.388 14:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:16:42.809 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.809 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:42.809 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.809 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.809 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.809 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.809 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:42.809 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:43.068 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:43.069 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.069 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:43.069 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:43.069 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:43.069 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.069 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.069 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.069 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.069 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.069 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.069 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.069 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.329 00:16:43.329 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.329 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.329 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.588 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.588 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.588 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.588 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.588 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.588 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.588 { 00:16:43.588 "cntlid": 67, 00:16:43.588 "qid": 0, 00:16:43.588 "state": "enabled", 00:16:43.588 "thread": "nvmf_tgt_poll_group_000", 00:16:43.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:43.588 "listen_address": { 00:16:43.588 "trtype": "TCP", 00:16:43.588 "adrfam": "IPv4", 00:16:43.588 "traddr": "10.0.0.2", 00:16:43.588 "trsvcid": "4420" 00:16:43.588 }, 00:16:43.588 "peer_address": { 00:16:43.588 "trtype": "TCP", 00:16:43.588 "adrfam": "IPv4", 00:16:43.588 "traddr": "10.0.0.1", 00:16:43.588 "trsvcid": "52460" 00:16:43.588 }, 00:16:43.588 "auth": { 00:16:43.588 "state": "completed", 00:16:43.588 "digest": "sha384", 00:16:43.588 "dhgroup": "ffdhe3072" 00:16:43.588 } 00:16:43.588 } 00:16:43.588 ]' 00:16:43.588 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.588 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:43.588 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.588 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:43.588 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.588 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.588 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.588 14:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.847 14:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret 
DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:16:43.847 14:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:16:44.415 14:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.415 14:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:44.415 14:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.415 14:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.415 14:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.415 14:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.415 14:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:44.415 14:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:44.674 14:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:44.674 14:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.675 14:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:44.675 14:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:44.675 14:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:44.675 14:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.675 14:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.675 14:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.675 14:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.675 14:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.675 14:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.675 14:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.675 14:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.935 00:16:44.935 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.935 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.935 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.195 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.195 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.195 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.195 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.195 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.195 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.195 { 00:16:45.195 "cntlid": 69, 00:16:45.195 "qid": 0, 00:16:45.195 "state": "enabled", 00:16:45.195 "thread": "nvmf_tgt_poll_group_000", 00:16:45.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:45.195 "listen_address": { 00:16:45.195 "trtype": "TCP", 00:16:45.195 "adrfam": "IPv4", 00:16:45.195 "traddr": "10.0.0.2", 00:16:45.195 "trsvcid": "4420" 00:16:45.195 }, 00:16:45.195 "peer_address": { 00:16:45.195 "trtype": "TCP", 00:16:45.195 "adrfam": "IPv4", 00:16:45.195 "traddr": "10.0.0.1", 00:16:45.195 "trsvcid": "52486" 00:16:45.195 }, 00:16:45.195 "auth": { 00:16:45.195 "state": "completed", 00:16:45.195 "digest": "sha384", 00:16:45.195 "dhgroup": "ffdhe3072" 00:16:45.195 } 00:16:45.195 } 00:16:45.195 ]' 00:16:45.195 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.195 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:45.195 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.195 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:45.195 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.195 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.196 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.196 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:45.456 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:16:45.456 14:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:16:46.026 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.026 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:46.026 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.026 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.026 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.026 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.026 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:46.026 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:46.286 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:46.286 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.286 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:46.286 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:46.286 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:46.286 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.286 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:46.286 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.286 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.286 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.286 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
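Annotation (not part of the captured log): the bdev_connect call just above attaches with --dhchap-key key3 and no --dhchap-ctrlr-key, unlike the key1/key2 passes earlier in this log. That is the effect of the expansion recorded at target/auth.sh@68: ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}). When the ckeys entry for a key id is unset or empty, the array expands to zero words and the flag is omitted, so controller (bidirectional) authentication is only requested for keys that have a companion ckey. A standalone sketch of the idiom, with illustrative names (keyid, the c* values) that are not from this log:

  keyid=3
  ckeys=([0]=c0 [1]=c1 [2]=c2 [3]=)   # key 3 has no controller key
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  # "${ckey[@]}" contributes zero words when ckeys[$keyid] is empty,
  # so the attach command simply omits the flag:
  echo bdev_nvme_attach_controller --dhchap-key "key$keyid" "${ckey[@]}"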
00:16:46.286 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.286 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.286 00:16:46.286 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.286 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.286 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.546 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.546 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.546 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.546 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.546 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.546 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.546 { 00:16:46.546 "cntlid": 71, 00:16:46.546 "qid": 0, 00:16:46.546 "state": "enabled", 00:16:46.546 "thread": "nvmf_tgt_poll_group_000", 00:16:46.546 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:46.546 "listen_address": { 00:16:46.546 "trtype": "TCP", 00:16:46.546 "adrfam": "IPv4", 00:16:46.546 "traddr": "10.0.0.2", 00:16:46.546 "trsvcid": "4420" 00:16:46.546 }, 00:16:46.546 "peer_address": { 00:16:46.546 "trtype": "TCP", 00:16:46.546 "adrfam": "IPv4", 00:16:46.546 "traddr": "10.0.0.1", 00:16:46.546 "trsvcid": "52522" 00:16:46.546 }, 00:16:46.546 "auth": { 00:16:46.546 "state": "completed", 00:16:46.546 "digest": "sha384", 00:16:46.546 "dhgroup": "ffdhe3072" 00:16:46.546 } 00:16:46.546 } 00:16:46.546 ]' 00:16:46.546 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.546 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.546 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.807 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:46.808 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.808 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.808 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.808 14:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.808 14:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:16:46.808 14:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:16:47.378 14:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.638 14:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:47.638 14:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.638 14:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.638 14:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.638 14:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.638 14:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.638 14:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:47.638 14:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:47.638 14:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:47.638 14:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.638 14:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:47.638 14:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:47.638 14:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:47.638 14:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.638 14:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.638 14:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.638 14:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.638 14:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
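Annotation (not part of the captured log): with the ffdhe3072 sweep finished, the records above switch both sides to dhgroup ffdhe4096 and restart the key loop at key0. Every pass in this log follows the same shape; a distilled sketch of one pass, assuming the sockets used by this job (target RPC on its default socket, host RPC on /var/tmp/host.sock), with the rpc.py path shortened and $HOSTNQN standing in for the uuid-based host NQN seen above:

  # host side: restrict negotiation to the digest/dhgroup under test
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
  # target side: allow the host NQN with host key + controller key
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: attach a controller, forcing the DH-HMAC-CHAP handshake
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # verify the qpair finished authenticating, then tear the path down
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.state'    # the checks above expect "completed"
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

After the RPC path is checked, the same pass repeats the handshake through the kernel initiator (nvme connect ... --dhchap-secret ... --dhchap-ctrl-secret ..., then nvme disconnect), and nvmf_subsystem_remove_host clears the host entry before the next iteration.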
00:16:47.638 14:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.638 14:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.638 14:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.897 00:16:47.897 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.897 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.897 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.156 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.156 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.156 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.156 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.156 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.156 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.156 { 00:16:48.156 "cntlid": 73, 00:16:48.156 "qid": 0, 00:16:48.156 "state": "enabled", 00:16:48.156 "thread": "nvmf_tgt_poll_group_000", 00:16:48.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:48.156 "listen_address": { 00:16:48.156 "trtype": "TCP", 00:16:48.156 "adrfam": "IPv4", 00:16:48.156 "traddr": "10.0.0.2", 00:16:48.156 "trsvcid": "4420" 00:16:48.156 }, 00:16:48.156 "peer_address": { 00:16:48.156 "trtype": "TCP", 00:16:48.156 "adrfam": "IPv4", 00:16:48.156 "traddr": "10.0.0.1", 00:16:48.156 "trsvcid": "52546" 00:16:48.156 }, 00:16:48.156 "auth": { 00:16:48.156 "state": "completed", 00:16:48.156 "digest": "sha384", 00:16:48.156 "dhgroup": "ffdhe4096" 00:16:48.156 } 00:16:48.156 } 00:16:48.156 ]' 00:16:48.156 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.156 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:48.156 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.156 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:48.156 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.416 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.416 
14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.416 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.416 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:16:48.416 14:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:16:48.986 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.986 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:48.986 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.986 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.986 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.986 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.986 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:48.986 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:49.246 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:49.246 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.246 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:49.246 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:49.246 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:49.246 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.246 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.246 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.246 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.246 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.246 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.246 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.246 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.506 00:16:49.506 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.506 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.506 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.766 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.766 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.766 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.766 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.767 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.767 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.767 { 00:16:49.767 "cntlid": 75, 00:16:49.767 "qid": 0, 00:16:49.767 "state": "enabled", 00:16:49.767 "thread": "nvmf_tgt_poll_group_000", 00:16:49.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:49.767 "listen_address": { 00:16:49.767 "trtype": "TCP", 00:16:49.767 "adrfam": "IPv4", 00:16:49.767 "traddr": "10.0.0.2", 00:16:49.767 "trsvcid": "4420" 00:16:49.767 }, 00:16:49.767 "peer_address": { 00:16:49.767 "trtype": "TCP", 00:16:49.767 "adrfam": "IPv4", 00:16:49.767 "traddr": "10.0.0.1", 00:16:49.767 "trsvcid": "52570" 00:16:49.767 }, 00:16:49.767 "auth": { 00:16:49.767 "state": "completed", 00:16:49.767 "digest": "sha384", 00:16:49.767 "dhgroup": "ffdhe4096" 00:16:49.767 } 00:16:49.767 } 00:16:49.767 ]' 00:16:49.767 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.767 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.767 14:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.767 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:16:49.767 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.026 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.026 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.026 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.026 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:16:50.026 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:16:50.596 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.596 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:50.596 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.596 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.596 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.596 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.596 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:50.596 14:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:50.858 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:50.858 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.858 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:50.858 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:50.858 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:50.858 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.858 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.858 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.858 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.858 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.858 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.858 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.858 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.118 00:16:51.118 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.118 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.118 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.379 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.379 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.379 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.379 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.379 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.379 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.379 { 00:16:51.379 "cntlid": 77, 00:16:51.379 "qid": 0, 00:16:51.379 "state": "enabled", 00:16:51.379 "thread": "nvmf_tgt_poll_group_000", 00:16:51.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:51.379 "listen_address": { 00:16:51.379 "trtype": "TCP", 00:16:51.379 "adrfam": "IPv4", 00:16:51.379 "traddr": "10.0.0.2", 00:16:51.379 "trsvcid": "4420" 00:16:51.379 }, 00:16:51.379 "peer_address": { 00:16:51.379 "trtype": "TCP", 00:16:51.379 "adrfam": "IPv4", 00:16:51.379 "traddr": "10.0.0.1", 00:16:51.379 "trsvcid": "49138" 00:16:51.379 }, 00:16:51.379 "auth": { 00:16:51.379 "state": "completed", 00:16:51.379 "digest": "sha384", 00:16:51.379 "dhgroup": "ffdhe4096" 00:16:51.379 } 00:16:51.379 } 00:16:51.379 ]' 00:16:51.379 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.379 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.379 14:06:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.379 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:51.379 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.379 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.379 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.379 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.639 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:16:51.639 14:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:16:52.207 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.207 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:52.207 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.207 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.207 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.207 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.208 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:52.208 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:52.467 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:52.467 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.467 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:52.467 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:52.467 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:52.467 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.467 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:52.467 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.467 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.467 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.467 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:52.467 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.467 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.727 00:16:52.727 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.727 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.727 14:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.987 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.987 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.987 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.987 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.987 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.987 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.987 { 00:16:52.987 "cntlid": 79, 00:16:52.987 "qid": 0, 00:16:52.987 "state": "enabled", 00:16:52.987 "thread": "nvmf_tgt_poll_group_000", 00:16:52.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:52.987 "listen_address": { 00:16:52.987 "trtype": "TCP", 00:16:52.987 "adrfam": "IPv4", 00:16:52.987 "traddr": "10.0.0.2", 00:16:52.987 "trsvcid": "4420" 00:16:52.987 }, 00:16:52.987 "peer_address": { 00:16:52.987 "trtype": "TCP", 00:16:52.987 "adrfam": "IPv4", 00:16:52.987 "traddr": "10.0.0.1", 00:16:52.987 "trsvcid": "49166" 00:16:52.987 }, 00:16:52.987 "auth": { 00:16:52.987 "state": "completed", 00:16:52.987 "digest": "sha384", 00:16:52.987 "dhgroup": "ffdhe4096" 00:16:52.987 } 00:16:52.987 } 00:16:52.987 ]' 00:16:52.987 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.987 14:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.987 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.987 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:52.987 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.987 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.987 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.987 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.247 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:16:53.247 14:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:16:53.816 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.816 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:53.816 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.816 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.816 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.816 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.816 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.816 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:53.816 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:54.074 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:54.074 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.074 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:54.075 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:54.075 14:07:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:54.075 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.075 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.075 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.075 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.075 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.075 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.075 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.075 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.333 00:16:54.333 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.333 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.333 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.592 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.592 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.592 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.592 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.592 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.592 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.592 { 00:16:54.592 "cntlid": 81, 00:16:54.592 "qid": 0, 00:16:54.592 "state": "enabled", 00:16:54.592 "thread": "nvmf_tgt_poll_group_000", 00:16:54.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:54.592 "listen_address": { 00:16:54.592 "trtype": "TCP", 00:16:54.592 "adrfam": "IPv4", 00:16:54.592 "traddr": "10.0.0.2", 00:16:54.592 "trsvcid": "4420" 00:16:54.592 }, 00:16:54.592 "peer_address": { 00:16:54.592 "trtype": "TCP", 00:16:54.592 "adrfam": "IPv4", 00:16:54.592 "traddr": "10.0.0.1", 00:16:54.592 "trsvcid": "49208" 00:16:54.592 }, 00:16:54.592 "auth": { 00:16:54.592 "state": "completed", 00:16:54.592 "digest": 
"sha384", 00:16:54.592 "dhgroup": "ffdhe6144" 00:16:54.592 } 00:16:54.592 } 00:16:54.592 ]' 00:16:54.592 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.592 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.592 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.592 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:54.592 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.851 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.851 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.851 14:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.851 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:16:54.851 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:16:55.421 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.421 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:55.421 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.421 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.421 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.421 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.421 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:55.421 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:55.681 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:55.681 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.681 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:55.681 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:55.681 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:55.681 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.681 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.681 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.681 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.681 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.681 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.681 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.681 14:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.940 00:16:55.940 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.940 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.940 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.199 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.199 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.199 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.199 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.199 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.199 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.199 { 00:16:56.199 "cntlid": 83, 00:16:56.199 "qid": 0, 00:16:56.199 "state": "enabled", 00:16:56.199 "thread": "nvmf_tgt_poll_group_000", 00:16:56.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:56.199 "listen_address": { 00:16:56.199 "trtype": "TCP", 00:16:56.199 "adrfam": "IPv4", 00:16:56.199 "traddr": "10.0.0.2", 00:16:56.199 
"trsvcid": "4420" 00:16:56.199 }, 00:16:56.199 "peer_address": { 00:16:56.199 "trtype": "TCP", 00:16:56.199 "adrfam": "IPv4", 00:16:56.199 "traddr": "10.0.0.1", 00:16:56.199 "trsvcid": "49242" 00:16:56.199 }, 00:16:56.199 "auth": { 00:16:56.199 "state": "completed", 00:16:56.199 "digest": "sha384", 00:16:56.199 "dhgroup": "ffdhe6144" 00:16:56.199 } 00:16:56.199 } 00:16:56.199 ]' 00:16:56.199 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.199 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.199 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.199 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:56.199 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.459 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.459 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.459 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.459 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:16:56.459 14:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:16:57.029 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.029 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:57.029 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.029 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.029 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.029 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.029 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:57.029 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:57.298 
14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:57.298 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.299 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:57.299 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:57.299 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:57.299 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.299 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.299 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.299 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.299 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.299 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.299 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.299 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.571 00:16:57.571 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.571 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.571 14:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.829 14:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.829 14:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.829 14:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.829 14:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.829 14:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.829 14:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.829 { 00:16:57.829 "cntlid": 85, 00:16:57.829 "qid": 0, 00:16:57.829 "state": "enabled", 00:16:57.829 "thread": "nvmf_tgt_poll_group_000", 00:16:57.829 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:57.829 "listen_address": { 00:16:57.829 "trtype": "TCP", 00:16:57.829 "adrfam": "IPv4", 00:16:57.829 "traddr": "10.0.0.2", 00:16:57.829 "trsvcid": "4420" 00:16:57.829 }, 00:16:57.829 "peer_address": { 00:16:57.829 "trtype": "TCP", 00:16:57.829 "adrfam": "IPv4", 00:16:57.829 "traddr": "10.0.0.1", 00:16:57.829 "trsvcid": "49270" 00:16:57.829 }, 00:16:57.829 "auth": { 00:16:57.829 "state": "completed", 00:16:57.829 "digest": "sha384", 00:16:57.829 "dhgroup": "ffdhe6144" 00:16:57.829 } 00:16:57.829 } 00:16:57.829 ]' 00:16:57.829 14:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.829 14:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.829 14:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.088 14:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:58.088 14:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.088 14:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.088 14:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.088 14:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.088 14:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:16:58.088 14:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:16:59.025 14:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.025 14:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:59.025 14:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.025 14:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.025 14:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.025 14:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.025 14:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:59.025 14:07:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:59.025 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:59.025 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.025 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:59.025 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:59.025 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:59.025 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.025 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:59.025 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.025 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.025 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.025 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:59.025 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.025 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.286 00:16:59.286 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.286 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.286 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.546 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.546 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.546 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.546 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.546 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.546 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.546 { 00:16:59.546 "cntlid": 87, 
00:16:59.546 "qid": 0, 00:16:59.546 "state": "enabled", 00:16:59.546 "thread": "nvmf_tgt_poll_group_000", 00:16:59.546 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:59.546 "listen_address": { 00:16:59.546 "trtype": "TCP", 00:16:59.546 "adrfam": "IPv4", 00:16:59.546 "traddr": "10.0.0.2", 00:16:59.546 "trsvcid": "4420" 00:16:59.546 }, 00:16:59.546 "peer_address": { 00:16:59.546 "trtype": "TCP", 00:16:59.546 "adrfam": "IPv4", 00:16:59.546 "traddr": "10.0.0.1", 00:16:59.546 "trsvcid": "49296" 00:16:59.546 }, 00:16:59.546 "auth": { 00:16:59.546 "state": "completed", 00:16:59.546 "digest": "sha384", 00:16:59.546 "dhgroup": "ffdhe6144" 00:16:59.546 } 00:16:59.546 } 00:16:59.546 ]' 00:16:59.546 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.546 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.546 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.546 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:59.546 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.546 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.546 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.546 14:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.807 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:16:59.807 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:17:00.377 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.377 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:00.377 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.378 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.378 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.378 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.378 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.378 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:00.378 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:00.638 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:00.638 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.638 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.638 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:00.638 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:00.638 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.638 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.638 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.638 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.638 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.638 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.638 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.638 14:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.208 00:17:01.208 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.208 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.208 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.208 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.208 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.208 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.208 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.208 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.208 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.208 { 00:17:01.208 "cntlid": 89, 00:17:01.208 "qid": 0, 00:17:01.208 "state": "enabled", 00:17:01.208 "thread": "nvmf_tgt_poll_group_000", 00:17:01.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:01.208 "listen_address": { 00:17:01.208 "trtype": "TCP", 00:17:01.208 "adrfam": "IPv4", 00:17:01.208 "traddr": "10.0.0.2", 00:17:01.208 "trsvcid": "4420" 00:17:01.208 }, 00:17:01.208 "peer_address": { 00:17:01.208 "trtype": "TCP", 00:17:01.208 "adrfam": "IPv4", 00:17:01.208 "traddr": "10.0.0.1", 00:17:01.208 "trsvcid": "45530" 00:17:01.208 }, 00:17:01.208 "auth": { 00:17:01.208 "state": "completed", 00:17:01.208 "digest": "sha384", 00:17:01.208 "dhgroup": "ffdhe8192" 00:17:01.208 } 00:17:01.208 } 00:17:01.208 ]' 00:17:01.208 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.468 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.468 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.468 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:01.468 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.468 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.468 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.468 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.728 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:17:01.728 14:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:17:02.298 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.298 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:02.298 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.298 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.298 14:07:08 
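Condensed, one connect_authenticate round as traced above comes down to three RPCs: register the host NQN with its DH-HMAC-CHAP keys on the target, attach from the host with the matching keys, then tear the controller down again. A sketch using the key0/ckey0 names and the NQNs from this run (rpc_cmd/hostrpc as in the earlier sketches):

    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0    # allow host, bind its keys
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0    # authenticated attach
    hostrpc bdev_nvme_detach_controller nvme0         # drop the qpair again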
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.298 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.298 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:02.298 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:02.298 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:02.298 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.299 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:02.299 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:02.299 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:02.299 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.299 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.299 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.299 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.559 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.559 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.559 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.559 14:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.820 00:17:02.820 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.820 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.820 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.080 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.080 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:03.080 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.080 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.080 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.080 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.080 { 00:17:03.080 "cntlid": 91, 00:17:03.080 "qid": 0, 00:17:03.080 "state": "enabled", 00:17:03.080 "thread": "nvmf_tgt_poll_group_000", 00:17:03.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:03.080 "listen_address": { 00:17:03.080 "trtype": "TCP", 00:17:03.080 "adrfam": "IPv4", 00:17:03.080 "traddr": "10.0.0.2", 00:17:03.080 "trsvcid": "4420" 00:17:03.080 }, 00:17:03.080 "peer_address": { 00:17:03.080 "trtype": "TCP", 00:17:03.080 "adrfam": "IPv4", 00:17:03.080 "traddr": "10.0.0.1", 00:17:03.080 "trsvcid": "45546" 00:17:03.080 }, 00:17:03.080 "auth": { 00:17:03.080 "state": "completed", 00:17:03.080 "digest": "sha384", 00:17:03.080 "dhgroup": "ffdhe8192" 00:17:03.080 } 00:17:03.080 } 00:17:03.080 ]' 00:17:03.080 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.080 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.080 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.080 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:03.080 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.340 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.340 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.340 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.340 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:17:03.340 14:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:17:03.911 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.911 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:03.911 14:07:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.911 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.911 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.911 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.911 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:03.911 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:04.173 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:04.173 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.173 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:04.173 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:04.173 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:04.173 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.173 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.173 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.173 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.173 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.173 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.173 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.173 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.744 00:17:04.744 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.744 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.744 14:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.744 14:07:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.744 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.744 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.744 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.005 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.005 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.005 { 00:17:05.005 "cntlid": 93, 00:17:05.005 "qid": 0, 00:17:05.005 "state": "enabled", 00:17:05.005 "thread": "nvmf_tgt_poll_group_000", 00:17:05.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:05.005 "listen_address": { 00:17:05.005 "trtype": "TCP", 00:17:05.005 "adrfam": "IPv4", 00:17:05.005 "traddr": "10.0.0.2", 00:17:05.005 "trsvcid": "4420" 00:17:05.005 }, 00:17:05.005 "peer_address": { 00:17:05.005 "trtype": "TCP", 00:17:05.005 "adrfam": "IPv4", 00:17:05.005 "traddr": "10.0.0.1", 00:17:05.005 "trsvcid": "45580" 00:17:05.005 }, 00:17:05.005 "auth": { 00:17:05.005 "state": "completed", 00:17:05.005 "digest": "sha384", 00:17:05.005 "dhgroup": "ffdhe8192" 00:17:05.005 } 00:17:05.005 } 00:17:05.005 ]' 00:17:05.005 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.005 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.005 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.005 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:05.005 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.005 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.005 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.005 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.266 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:17:05.266 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:17:05.837 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.837 14:07:11 
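After each SPDK-host round, the same credentials are exercised once more through the kernel initiator, which is what the nvme connect/disconnect pair above shows: nvme-cli passes the DHHC-1 secrets in-band instead of referencing named keys. A sketch with the secrets elided (the full DHHC-1:xx:<base64>: blobs appear verbatim in the trace):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
        --dhchap-secret 'DHHC-1:02:<base64 elided>:' \
        --dhchap-ctrl-secret 'DHHC-1:01:<base64 elided>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0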
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:05.837 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.837 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.837 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.837 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.837 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:05.837 14:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:06.098 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:06.098 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.098 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:06.098 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:06.098 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:06.098 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.098 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:06.098 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.098 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.098 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.098 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:06.098 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.098 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.358 00:17:06.358 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.358 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.358 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.618 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.618 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.618 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.618 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.618 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.618 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.618 { 00:17:06.618 "cntlid": 95, 00:17:06.618 "qid": 0, 00:17:06.618 "state": "enabled", 00:17:06.618 "thread": "nvmf_tgt_poll_group_000", 00:17:06.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:06.618 "listen_address": { 00:17:06.618 "trtype": "TCP", 00:17:06.618 "adrfam": "IPv4", 00:17:06.618 "traddr": "10.0.0.2", 00:17:06.618 "trsvcid": "4420" 00:17:06.618 }, 00:17:06.618 "peer_address": { 00:17:06.618 "trtype": "TCP", 00:17:06.618 "adrfam": "IPv4", 00:17:06.618 "traddr": "10.0.0.1", 00:17:06.618 "trsvcid": "45606" 00:17:06.618 }, 00:17:06.618 "auth": { 00:17:06.618 "state": "completed", 00:17:06.618 "digest": "sha384", 00:17:06.618 "dhgroup": "ffdhe8192" 00:17:06.618 } 00:17:06.618 } 00:17:06.618 ]' 00:17:06.618 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.618 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.618 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.878 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:06.878 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.878 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.878 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.878 14:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.878 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:17:06.878 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:17:07.447 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.709 14:07:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:07.709 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.709 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.709 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.709 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:07.709 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.709 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.709 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:07.709 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:07.709 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:07.709 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.709 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:07.709 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:07.709 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:07.709 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.709 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.709 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.709 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.709 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.709 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.709 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.709 14:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.969 00:17:07.969 
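With the ffdhe8192 legs done, the trace above moves on to sha512 with the "null" DH group, where the handshake runs as plain HMAC challenge/response without the ephemeral Diffie-Hellman augmentation. The only change per leg is the option pair pinned on the host:

    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null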
14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.969 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.969 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.229 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.229 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.229 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.229 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.229 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.229 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.229 { 00:17:08.229 "cntlid": 97, 00:17:08.229 "qid": 0, 00:17:08.229 "state": "enabled", 00:17:08.229 "thread": "nvmf_tgt_poll_group_000", 00:17:08.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:08.229 "listen_address": { 00:17:08.229 "trtype": "TCP", 00:17:08.229 "adrfam": "IPv4", 00:17:08.229 "traddr": "10.0.0.2", 00:17:08.229 "trsvcid": "4420" 00:17:08.229 }, 00:17:08.230 "peer_address": { 00:17:08.230 "trtype": "TCP", 00:17:08.230 "adrfam": "IPv4", 00:17:08.230 "traddr": "10.0.0.1", 00:17:08.230 "trsvcid": "45628" 00:17:08.230 }, 00:17:08.230 "auth": { 00:17:08.230 "state": "completed", 00:17:08.230 "digest": "sha512", 00:17:08.230 "dhgroup": "null" 00:17:08.230 } 00:17:08.230 } 00:17:08.230 ]' 00:17:08.230 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.230 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.230 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.230 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:08.230 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.230 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.230 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.230 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.490 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:17:08.490 14:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:17:09.059 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.059 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:09.059 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.059 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.059 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.059 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.059 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:09.059 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:09.318 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:09.318 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.319 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:09.319 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:09.319 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:09.319 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.319 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.319 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.319 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.319 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.319 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.319 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.319 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.579 00:17:09.579 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.579 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.579 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.839 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.839 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.839 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.839 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.839 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.839 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.839 { 00:17:09.839 "cntlid": 99, 00:17:09.839 "qid": 0, 00:17:09.839 "state": "enabled", 00:17:09.839 "thread": "nvmf_tgt_poll_group_000", 00:17:09.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:09.839 "listen_address": { 00:17:09.839 "trtype": "TCP", 00:17:09.839 "adrfam": "IPv4", 00:17:09.839 "traddr": "10.0.0.2", 00:17:09.839 "trsvcid": "4420" 00:17:09.839 }, 00:17:09.839 "peer_address": { 00:17:09.839 "trtype": "TCP", 00:17:09.839 "adrfam": "IPv4", 00:17:09.839 "traddr": "10.0.0.1", 00:17:09.839 "trsvcid": "45662" 00:17:09.839 }, 00:17:09.839 "auth": { 00:17:09.839 "state": "completed", 00:17:09.839 "digest": "sha512", 00:17:09.839 "dhgroup": "null" 00:17:09.839 } 00:17:09.839 } 00:17:09.839 ]' 00:17:09.839 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.839 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.839 14:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.839 14:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:09.839 14:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.840 14:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.840 14:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.840 14:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.101 14:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:17:10.101 14:07:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:17:10.671 14:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.671 14:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:10.671 14:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.671 14:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.671 14:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.671 14:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.671 14:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:10.671 14:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:10.932 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:10.933 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.933 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:10.933 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:10.933 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:10.933 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.933 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.933 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.933 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.933 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.933 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.933 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:10.933 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.193 00:17:11.193 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.193 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.193 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.193 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.193 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.193 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.193 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.193 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.193 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.193 { 00:17:11.193 "cntlid": 101, 00:17:11.193 "qid": 0, 00:17:11.193 "state": "enabled", 00:17:11.193 "thread": "nvmf_tgt_poll_group_000", 00:17:11.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:11.193 "listen_address": { 00:17:11.193 "trtype": "TCP", 00:17:11.193 "adrfam": "IPv4", 00:17:11.193 "traddr": "10.0.0.2", 00:17:11.193 "trsvcid": "4420" 00:17:11.193 }, 00:17:11.193 "peer_address": { 00:17:11.193 "trtype": "TCP", 00:17:11.193 "adrfam": "IPv4", 00:17:11.193 "traddr": "10.0.0.1", 00:17:11.193 "trsvcid": "35968" 00:17:11.193 }, 00:17:11.193 "auth": { 00:17:11.193 "state": "completed", 00:17:11.193 "digest": "sha512", 00:17:11.193 "dhgroup": "null" 00:17:11.193 } 00:17:11.193 } 00:17:11.193 ]' 00:17:11.193 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.454 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.454 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.454 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:11.454 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.454 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.454 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.454 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.715 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:17:11.715 14:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:17:12.288 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.288 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:12.288 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.288 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.288 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.288 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.288 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:12.288 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:12.288 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:12.288 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.288 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:12.288 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:12.288 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:12.288 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.288 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:12.288 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.288 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.288 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.288 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:12.288 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.288 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.549 00:17:12.549 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.549 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.549 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.810 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.810 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.810 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.810 14:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.810 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.810 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.810 { 00:17:12.810 "cntlid": 103, 00:17:12.810 "qid": 0, 00:17:12.810 "state": "enabled", 00:17:12.810 "thread": "nvmf_tgt_poll_group_000", 00:17:12.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:12.810 "listen_address": { 00:17:12.810 "trtype": "TCP", 00:17:12.810 "adrfam": "IPv4", 00:17:12.810 "traddr": "10.0.0.2", 00:17:12.810 "trsvcid": "4420" 00:17:12.810 }, 00:17:12.810 "peer_address": { 00:17:12.810 "trtype": "TCP", 00:17:12.810 "adrfam": "IPv4", 00:17:12.810 "traddr": "10.0.0.1", 00:17:12.810 "trsvcid": "36002" 00:17:12.810 }, 00:17:12.810 "auth": { 00:17:12.810 "state": "completed", 00:17:12.810 "digest": "sha512", 00:17:12.810 "dhgroup": "null" 00:17:12.810 } 00:17:12.810 } 00:17:12.810 ]' 00:17:12.810 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.810 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.810 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.810 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:12.810 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.071 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.071 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.071 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.071 14:07:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:17:13.071 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:17:13.641 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.641 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:13.641 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.641 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.641 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.641 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.641 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.641 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:13.642 14:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:13.902 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:13.902 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.902 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:13.902 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:13.902 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:13.902 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.902 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.902 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.902 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.902 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.902 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
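
After each attach, the trace verifies what was negotiated by reading the controller list back from the host socket and the qpair list from the target, filtering with jq exactly as target/auth.sh does. A sketch of those checks, reusing the variables from the sketch earlier in this section:

  # Confirm the controller exists, then inspect the negotiated auth parameters.
  $RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  $RPC nvmf_subsystem_get_qpairs "$SUBNQN" > qpairs.json
  jq -r '.[0].auth.digest'  qpairs.json   # expect: sha512
  jq -r '.[0].auth.dhgroup' qpairs.json   # expect: the group set above (here: ffdhe2048)
  jq -r '.[0].auth.state'   qpairs.json   # expect: completed
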
00:17:13.902 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.902 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.162 00:17:14.162 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.162 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.162 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.422 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.423 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.423 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.423 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.423 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.423 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.423 { 00:17:14.423 "cntlid": 105, 00:17:14.423 "qid": 0, 00:17:14.423 "state": "enabled", 00:17:14.423 "thread": "nvmf_tgt_poll_group_000", 00:17:14.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:14.423 "listen_address": { 00:17:14.423 "trtype": "TCP", 00:17:14.423 "adrfam": "IPv4", 00:17:14.423 "traddr": "10.0.0.2", 00:17:14.423 "trsvcid": "4420" 00:17:14.423 }, 00:17:14.423 "peer_address": { 00:17:14.423 "trtype": "TCP", 00:17:14.423 "adrfam": "IPv4", 00:17:14.423 "traddr": "10.0.0.1", 00:17:14.423 "trsvcid": "36032" 00:17:14.423 }, 00:17:14.423 "auth": { 00:17:14.423 "state": "completed", 00:17:14.423 "digest": "sha512", 00:17:14.423 "dhgroup": "ffdhe2048" 00:17:14.423 } 00:17:14.423 } 00:17:14.423 ]' 00:17:14.423 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.423 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.423 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.423 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:14.423 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.423 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.423 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.423 14:07:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.684 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:17:14.684 14:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:17:15.255 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.255 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:15.255 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.255 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.255 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.255 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.255 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:15.255 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:15.516 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:15.516 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.516 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:15.516 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:15.516 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:15.516 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.516 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.516 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.516 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:15.516 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.516 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.516 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.516 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.777 00:17:15.777 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.777 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.777 14:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.037 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.037 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.037 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.037 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.037 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.037 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.037 { 00:17:16.037 "cntlid": 107, 00:17:16.037 "qid": 0, 00:17:16.037 "state": "enabled", 00:17:16.037 "thread": "nvmf_tgt_poll_group_000", 00:17:16.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:16.037 "listen_address": { 00:17:16.037 "trtype": "TCP", 00:17:16.037 "adrfam": "IPv4", 00:17:16.037 "traddr": "10.0.0.2", 00:17:16.037 "trsvcid": "4420" 00:17:16.037 }, 00:17:16.037 "peer_address": { 00:17:16.037 "trtype": "TCP", 00:17:16.037 "adrfam": "IPv4", 00:17:16.037 "traddr": "10.0.0.1", 00:17:16.037 "trsvcid": "36064" 00:17:16.037 }, 00:17:16.037 "auth": { 00:17:16.037 "state": "completed", 00:17:16.037 "digest": "sha512", 00:17:16.037 "dhgroup": "ffdhe2048" 00:17:16.037 } 00:17:16.037 } 00:17:16.037 ]' 00:17:16.037 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.037 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.037 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.037 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:16.037 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:16.037 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.037 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.037 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.298 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:17:16.298 14:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:17:16.869 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.869 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:16.869 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.869 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.869 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.869 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.869 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:16.869 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:17.129 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:17.129 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.129 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:17.129 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:17.129 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:17.129 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.129 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
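
Besides the SPDK-host attach, every pass also exercises the kernel initiator through nvme-cli, passing the DH-HMAC-CHAP secrets inline, and then tears the host back out of the subsystem before the next combination. A sketch of that leg, again with the variables from the first sketch; the secrets below are truncated placeholders, whereas the real run passes the full DHHC-1 strings visible in the trace:

  # Kernel initiator: connect with inline DH-HMAC-CHAP secrets, then disconnect.
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
      --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
      --dhchap-secret 'DHHC-1:01:<host key>' --dhchap-ctrl-secret 'DHHC-1:02:<ctrl key>'
  nvme disconnect -n "$SUBNQN"    # prints: ... disconnected 1 controller(s)
  # Remove the host so the next (digest, dhgroup, key) combination starts clean.
  $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
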
00:17:17.129 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.129 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.129 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.129 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.129 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.129 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.388 00:17:17.388 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.388 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.388 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.388 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.388 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.388 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.388 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.647 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.647 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.647 { 00:17:17.647 "cntlid": 109, 00:17:17.647 "qid": 0, 00:17:17.647 "state": "enabled", 00:17:17.647 "thread": "nvmf_tgt_poll_group_000", 00:17:17.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:17.647 "listen_address": { 00:17:17.647 "trtype": "TCP", 00:17:17.647 "adrfam": "IPv4", 00:17:17.647 "traddr": "10.0.0.2", 00:17:17.647 "trsvcid": "4420" 00:17:17.647 }, 00:17:17.647 "peer_address": { 00:17:17.647 "trtype": "TCP", 00:17:17.647 "adrfam": "IPv4", 00:17:17.647 "traddr": "10.0.0.1", 00:17:17.647 "trsvcid": "36082" 00:17:17.647 }, 00:17:17.647 "auth": { 00:17:17.647 "state": "completed", 00:17:17.647 "digest": "sha512", 00:17:17.647 "dhgroup": "ffdhe2048" 00:17:17.647 } 00:17:17.647 } 00:17:17.647 ]' 00:17:17.647 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.647 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:17.647 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.647 14:07:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:17.647 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.647 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.647 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.648 14:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.907 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:17:17.907 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:17:18.476 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.476 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:18.476 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.476 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.476 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.476 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.476 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:18.476 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:18.737 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:18.737 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.737 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:18.737 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:18.737 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:18.737 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.737 14:07:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:18.737 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.737 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.737 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.737 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:18.737 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:18.737 14:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:18.737 00:17:18.997 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.997 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.997 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.997 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.997 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.997 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.997 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.997 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.997 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.997 { 00:17:18.997 "cntlid": 111, 00:17:18.997 "qid": 0, 00:17:18.997 "state": "enabled", 00:17:18.997 "thread": "nvmf_tgt_poll_group_000", 00:17:18.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:18.997 "listen_address": { 00:17:18.997 "trtype": "TCP", 00:17:18.997 "adrfam": "IPv4", 00:17:18.997 "traddr": "10.0.0.2", 00:17:18.997 "trsvcid": "4420" 00:17:18.997 }, 00:17:18.997 "peer_address": { 00:17:18.997 "trtype": "TCP", 00:17:18.997 "adrfam": "IPv4", 00:17:18.997 "traddr": "10.0.0.1", 00:17:18.997 "trsvcid": "36106" 00:17:18.997 }, 00:17:18.997 "auth": { 00:17:18.997 "state": "completed", 00:17:18.997 "digest": "sha512", 00:17:18.997 "dhgroup": "ffdhe2048" 00:17:18.997 } 00:17:18.997 } 00:17:18.997 ]' 00:17:18.997 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.997 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.997 
14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.256 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:19.256 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.256 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.256 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.256 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.256 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:17:19.256 14:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:17:20.194 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.194 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.194 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.194 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.194 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.194 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.194 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.194 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:20.194 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:20.194 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:20.194 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.195 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:20.195 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:20.195 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:20.195 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.195 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.195 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.195 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.195 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.195 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.195 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.195 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.455 00:17:20.455 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.455 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.455 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.716 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.716 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.716 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.716 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.716 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.716 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.716 { 00:17:20.716 "cntlid": 113, 00:17:20.716 "qid": 0, 00:17:20.716 "state": "enabled", 00:17:20.716 "thread": "nvmf_tgt_poll_group_000", 00:17:20.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:20.716 "listen_address": { 00:17:20.716 "trtype": "TCP", 00:17:20.716 "adrfam": "IPv4", 00:17:20.716 "traddr": "10.0.0.2", 00:17:20.716 "trsvcid": "4420" 00:17:20.716 }, 00:17:20.716 "peer_address": { 00:17:20.716 "trtype": "TCP", 00:17:20.716 "adrfam": "IPv4", 00:17:20.716 "traddr": "10.0.0.1", 00:17:20.716 "trsvcid": "36144" 00:17:20.716 }, 00:17:20.716 "auth": { 00:17:20.716 "state": "completed", 00:17:20.716 "digest": "sha512", 00:17:20.716 "dhgroup": "ffdhe3072" 00:17:20.716 } 00:17:20.716 } 00:17:20.716 ]' 00:17:20.716 14:07:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.716 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.716 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.716 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:20.716 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.716 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.716 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.716 14:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.977 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:17:20.977 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:17:21.549 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.549 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:21.549 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.549 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.549 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.549 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.549 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:21.549 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:21.810 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:21.810 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.810 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:17:21.810 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:21.810 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:21.810 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.810 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.810 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.810 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.810 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.810 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.810 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.810 14:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.071 00:17:22.071 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.071 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.071 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.071 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.071 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.071 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.071 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.071 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.071 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.071 { 00:17:22.071 "cntlid": 115, 00:17:22.071 "qid": 0, 00:17:22.071 "state": "enabled", 00:17:22.071 "thread": "nvmf_tgt_poll_group_000", 00:17:22.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:22.071 "listen_address": { 00:17:22.071 "trtype": "TCP", 00:17:22.071 "adrfam": "IPv4", 00:17:22.071 "traddr": "10.0.0.2", 00:17:22.071 "trsvcid": "4420" 00:17:22.071 }, 00:17:22.071 "peer_address": { 00:17:22.071 "trtype": "TCP", 00:17:22.071 "adrfam": "IPv4", 
00:17:22.071 "traddr": "10.0.0.1", 00:17:22.071 "trsvcid": "56954" 00:17:22.071 }, 00:17:22.071 "auth": { 00:17:22.071 "state": "completed", 00:17:22.071 "digest": "sha512", 00:17:22.071 "dhgroup": "ffdhe3072" 00:17:22.071 } 00:17:22.071 } 00:17:22.071 ]' 00:17:22.071 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.332 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.333 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.333 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:22.333 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.333 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.333 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.333 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.594 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:17:22.594 14:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:17:23.165 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.165 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:23.165 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.165 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.165 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.165 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.165 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:23.165 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:23.165 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:17:23.165 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.165 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:23.165 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:23.165 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:23.165 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.165 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.166 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.166 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.166 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.166 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.166 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.166 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.426 00:17:23.426 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.426 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.426 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.686 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.686 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.686 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.686 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.686 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.686 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.686 { 00:17:23.686 "cntlid": 117, 00:17:23.686 "qid": 0, 00:17:23.686 "state": "enabled", 00:17:23.686 "thread": "nvmf_tgt_poll_group_000", 00:17:23.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:23.686 "listen_address": { 00:17:23.686 "trtype": "TCP", 
00:17:23.686 "adrfam": "IPv4", 00:17:23.686 "traddr": "10.0.0.2", 00:17:23.686 "trsvcid": "4420" 00:17:23.686 }, 00:17:23.686 "peer_address": { 00:17:23.686 "trtype": "TCP", 00:17:23.686 "adrfam": "IPv4", 00:17:23.686 "traddr": "10.0.0.1", 00:17:23.686 "trsvcid": "56990" 00:17:23.686 }, 00:17:23.686 "auth": { 00:17:23.686 "state": "completed", 00:17:23.686 "digest": "sha512", 00:17:23.686 "dhgroup": "ffdhe3072" 00:17:23.686 } 00:17:23.686 } 00:17:23.686 ]' 00:17:23.686 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.686 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.686 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.686 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:23.686 14:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.946 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.946 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.946 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.946 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:17:23.946 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:17:24.517 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.517 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:24.778 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.778 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.778 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.778 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.778 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:24.778 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:24.778 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:24.778 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.778 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:24.778 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:24.778 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:24.778 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.778 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:24.778 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.778 14:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.778 14:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.778 14:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:24.778 14:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:24.778 14:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:25.040 00:17:25.040 14:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.040 14:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.040 14:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.301 14:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.301 14:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.301 14:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.301 14:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.301 14:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.301 14:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.301 { 00:17:25.301 "cntlid": 119, 00:17:25.301 "qid": 0, 00:17:25.301 "state": "enabled", 00:17:25.301 "thread": "nvmf_tgt_poll_group_000", 00:17:25.301 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:25.301 "listen_address": { 00:17:25.301 "trtype": "TCP", 00:17:25.301 "adrfam": "IPv4", 00:17:25.301 "traddr": "10.0.0.2", 00:17:25.301 "trsvcid": "4420" 00:17:25.301 }, 00:17:25.301 "peer_address": { 00:17:25.301 "trtype": "TCP", 00:17:25.301 "adrfam": "IPv4", 00:17:25.301 "traddr": "10.0.0.1", 00:17:25.301 "trsvcid": "57014" 00:17:25.301 }, 00:17:25.301 "auth": { 00:17:25.301 "state": "completed", 00:17:25.301 "digest": "sha512", 00:17:25.301 "dhgroup": "ffdhe3072" 00:17:25.301 } 00:17:25.301 } 00:17:25.301 ]' 00:17:25.301 14:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.301 14:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.301 14:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.301 14:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:25.301 14:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.301 14:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.301 14:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.301 14:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.562 14:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:17:25.562 14:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:17:26.133 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.133 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:26.133 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.133 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.133 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.133 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.133 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.133 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:26.133 14:07:32 
00:17:26.133 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:26.133 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:26.133 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:26.133 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:26.393 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0
00:17:26.393 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:26.393 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:26.393 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:26.393 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:26.393 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:26.393 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:26.393 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.393 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:26.393 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.393 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:26.393 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:26.393 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:26.655
00:17:26.655 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:26.655 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:26.655 14:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:26.916 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
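At this point the host sees the authenticated controller as bdev nvme0. Condensing the pass above, the setup is a two-sided pair of RPCs: the target authorizes the host NQN and binds its keys through rpc_cmd (the suite's wrapper for the target-side rpc.py), while the host service attaches with the matching key names. A sketch under those assumptions (key0/ckey0 are key objects registered earlier in this run, not literal secrets):

    # Target side: authorize the host NQN for the subsystem and bind its
    # DH-HMAC-CHAP key; --dhchap-ctrlr-key additionally arms the reverse
    # (controller-to-host) direction for bidirectional authentication.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Host side: attach a controller through the host service, presenting
    # the same key pair; the nvme0 bdev only appears if the handshake succeeds.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0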
00:17:26.916 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:26.916 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:26.916 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:26.916 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:26.916 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:26.916 {
00:17:26.916 "cntlid": 121,
00:17:26.916 "qid": 0,
00:17:26.916 "state": "enabled",
00:17:26.916 "thread": "nvmf_tgt_poll_group_000",
00:17:26.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:26.916 "listen_address": {
00:17:26.916 "trtype": "TCP",
00:17:26.916 "adrfam": "IPv4",
00:17:26.916 "traddr": "10.0.0.2",
00:17:26.916 "trsvcid": "4420"
00:17:26.916 },
00:17:26.916 "peer_address": {
00:17:26.916 "trtype": "TCP",
00:17:26.916 "adrfam": "IPv4",
00:17:26.916 "traddr": "10.0.0.1",
00:17:26.916 "trsvcid": "57048"
00:17:26.916 },
00:17:26.916 "auth": {
00:17:26.916 "state": "completed",
00:17:26.916 "digest": "sha512",
00:17:26.916 "dhgroup": "ffdhe4096"
00:17:26.916 }
00:17:26.916 }
00:17:26.916 ]'
00:17:26.917 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:26.917 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:26.917 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:26.917 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:26.917 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:26.917 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:26.917 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:26.917 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:27.177 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=:
00:17:27.177 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=:
00:17:27.749 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:27.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:27.749 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:27.749 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:27.749 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:27.749 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
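Each pass is then verified from both ends. On the target, nvmf_subsystem_get_qpairs exposes the negotiated auth parameters per queue pair, which the suite asserts with jq; on the initiator side the same subsystem is exercised once more through the kernel with textual DHHC-1 secrets rather than SPDK key objects. A condensed sketch (the two secrets below are placeholders, not this job's key material):

    # Target side: the accepted qpair should report the pinned parameters.
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect: completed
    # Kernel initiator: connect with the host secret (and, when a ckey is
    # configured, the controller secret) in nvme-cli's DHHC-1 text format.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
        --dhchap-secret 'DHHC-1:00:<host key, base64>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<controller key, base64>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The "NQN:... disconnected 1 controller(s)" lines in this log are effectively the pass signal for the kernel leg: nvme connect only leaves a controller to disconnect if the DH-HMAC-CHAP exchange completed.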
00:17:27.749 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.749 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:27.749 14:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:28.009 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:28.009 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.009 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:28.009 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:28.009 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:28.009 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.009 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.009 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.009 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.009 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.009 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.009 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.009 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.270 00:17:28.270 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.270 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.270 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.531 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.531 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.531 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.531 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.531 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.531 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.531 { 00:17:28.531 "cntlid": 123, 00:17:28.531 "qid": 0, 00:17:28.531 "state": "enabled", 00:17:28.531 "thread": "nvmf_tgt_poll_group_000", 00:17:28.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:28.531 "listen_address": { 00:17:28.531 "trtype": "TCP", 00:17:28.531 "adrfam": "IPv4", 00:17:28.531 "traddr": "10.0.0.2", 00:17:28.531 "trsvcid": "4420" 00:17:28.531 }, 00:17:28.531 "peer_address": { 00:17:28.531 "trtype": "TCP", 00:17:28.531 "adrfam": "IPv4", 00:17:28.531 "traddr": "10.0.0.1", 00:17:28.531 "trsvcid": "57076" 00:17:28.531 }, 00:17:28.531 "auth": { 00:17:28.531 "state": "completed", 00:17:28.531 "digest": "sha512", 00:17:28.531 "dhgroup": "ffdhe4096" 00:17:28.531 } 00:17:28.531 } 00:17:28.531 ]' 00:17:28.531 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.531 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.531 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.531 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:28.531 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.531 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.531 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.531 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.792 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:17:28.792 14:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:17:29.361 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.361 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:29.362 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.362 14:07:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.362 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.362 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.362 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:29.362 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:29.622 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:29.622 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.622 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:29.622 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:29.622 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:29.622 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.622 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.622 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.622 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.622 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.623 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.623 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.623 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.883 00:17:29.884 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.884 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.884 14:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.884 14:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.884 14:07:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.884 14:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.884 14:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.884 14:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.884 14:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.884 { 00:17:29.884 "cntlid": 125, 00:17:29.884 "qid": 0, 00:17:29.884 "state": "enabled", 00:17:29.884 "thread": "nvmf_tgt_poll_group_000", 00:17:29.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:29.884 "listen_address": { 00:17:29.884 "trtype": "TCP", 00:17:29.884 "adrfam": "IPv4", 00:17:29.884 "traddr": "10.0.0.2", 00:17:29.884 "trsvcid": "4420" 00:17:29.884 }, 00:17:29.884 "peer_address": { 00:17:29.884 "trtype": "TCP", 00:17:29.884 "adrfam": "IPv4", 00:17:29.884 "traddr": "10.0.0.1", 00:17:29.884 "trsvcid": "57104" 00:17:29.884 }, 00:17:29.884 "auth": { 00:17:29.884 "state": "completed", 00:17:29.884 "digest": "sha512", 00:17:29.884 "dhgroup": "ffdhe4096" 00:17:29.884 } 00:17:29.884 } 00:17:29.884 ]' 00:17:30.144 14:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.144 14:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.144 14:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.144 14:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:30.144 14:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.144 14:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.144 14:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.144 14:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.405 14:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:17:30.405 14:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:17:30.977 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.977 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.977 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.977 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.977 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.977 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.977 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:30.977 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:31.239 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:31.239 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.239 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:31.239 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:31.239 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:31.239 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.239 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:31.239 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.239 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.239 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.239 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:31.239 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.239 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.500 00:17:31.500 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.500 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.500 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.500 14:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.500 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.500 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.500 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.500 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.500 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.500 { 00:17:31.500 "cntlid": 127, 00:17:31.500 "qid": 0, 00:17:31.500 "state": "enabled", 00:17:31.500 "thread": "nvmf_tgt_poll_group_000", 00:17:31.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:31.500 "listen_address": { 00:17:31.500 "trtype": "TCP", 00:17:31.500 "adrfam": "IPv4", 00:17:31.500 "traddr": "10.0.0.2", 00:17:31.500 "trsvcid": "4420" 00:17:31.500 }, 00:17:31.500 "peer_address": { 00:17:31.500 "trtype": "TCP", 00:17:31.500 "adrfam": "IPv4", 00:17:31.500 "traddr": "10.0.0.1", 00:17:31.500 "trsvcid": "55988" 00:17:31.500 }, 00:17:31.500 "auth": { 00:17:31.500 "state": "completed", 00:17:31.500 "digest": "sha512", 00:17:31.500 "dhgroup": "ffdhe4096" 00:17:31.500 } 00:17:31.500 } 00:17:31.500 ]' 00:17:31.500 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.760 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:31.761 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.761 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:31.761 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.761 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.761 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.761 14:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.020 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:17:32.021 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:17:32.591 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.591 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:32.591 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.591 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.591 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.591 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.591 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.591 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:32.591 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:32.591 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:32.591 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.591 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:32.591 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:32.591 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:32.591 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.592 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.592 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.592 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.852 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.852 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.852 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.852 14:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.113 00:17:33.113 14:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.113 14:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.113 
14:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.373 14:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.373 14:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.373 14:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.373 14:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.373 14:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.373 14:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.373 { 00:17:33.373 "cntlid": 129, 00:17:33.373 "qid": 0, 00:17:33.373 "state": "enabled", 00:17:33.373 "thread": "nvmf_tgt_poll_group_000", 00:17:33.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:33.373 "listen_address": { 00:17:33.373 "trtype": "TCP", 00:17:33.373 "adrfam": "IPv4", 00:17:33.373 "traddr": "10.0.0.2", 00:17:33.373 "trsvcid": "4420" 00:17:33.373 }, 00:17:33.373 "peer_address": { 00:17:33.373 "trtype": "TCP", 00:17:33.373 "adrfam": "IPv4", 00:17:33.373 "traddr": "10.0.0.1", 00:17:33.373 "trsvcid": "56014" 00:17:33.373 }, 00:17:33.373 "auth": { 00:17:33.373 "state": "completed", 00:17:33.373 "digest": "sha512", 00:17:33.373 "dhgroup": "ffdhe6144" 00:17:33.373 } 00:17:33.373 } 00:17:33.373 ]' 00:17:33.373 14:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.373 14:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.373 14:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.373 14:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:33.373 14:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.373 14:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.373 14:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.373 14:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.634 14:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:17:33.634 14:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret 
DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:17:34.205 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.205 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:34.205 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.205 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.205 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.205 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.205 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:34.205 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:34.205 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:34.205 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.205 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:34.205 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:34.205 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:34.205 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.205 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.205 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.205 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.464 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.464 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.464 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.465 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.724 00:17:34.724 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.724 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.724 14:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.984 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.984 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.984 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.984 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.984 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.984 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.984 { 00:17:34.984 "cntlid": 131, 00:17:34.984 "qid": 0, 00:17:34.984 "state": "enabled", 00:17:34.984 "thread": "nvmf_tgt_poll_group_000", 00:17:34.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:34.984 "listen_address": { 00:17:34.984 "trtype": "TCP", 00:17:34.984 "adrfam": "IPv4", 00:17:34.984 "traddr": "10.0.0.2", 00:17:34.984 "trsvcid": "4420" 00:17:34.984 }, 00:17:34.984 "peer_address": { 00:17:34.984 "trtype": "TCP", 00:17:34.984 "adrfam": "IPv4", 00:17:34.984 "traddr": "10.0.0.1", 00:17:34.985 "trsvcid": "56038" 00:17:34.985 }, 00:17:34.985 "auth": { 00:17:34.985 "state": "completed", 00:17:34.985 "digest": "sha512", 00:17:34.985 "dhgroup": "ffdhe6144" 00:17:34.985 } 00:17:34.985 } 00:17:34.985 ]' 00:17:34.985 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.985 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.985 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.985 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:34.985 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.985 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.985 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.985 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.245 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:17:35.245 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:17:35.818 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.818 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.818 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.818 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.818 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.818 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.818 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:35.818 14:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:36.102 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:36.102 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.102 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:36.102 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:36.102 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:36.102 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.102 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.102 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.102 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.102 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.102 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.102 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.102 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.364 00:17:36.364 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.364 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.364 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.626 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.626 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.626 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.626 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.626 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.626 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.626 { 00:17:36.626 "cntlid": 133, 00:17:36.626 "qid": 0, 00:17:36.626 "state": "enabled", 00:17:36.626 "thread": "nvmf_tgt_poll_group_000", 00:17:36.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:36.626 "listen_address": { 00:17:36.626 "trtype": "TCP", 00:17:36.626 "adrfam": "IPv4", 00:17:36.626 "traddr": "10.0.0.2", 00:17:36.626 "trsvcid": "4420" 00:17:36.626 }, 00:17:36.626 "peer_address": { 00:17:36.626 "trtype": "TCP", 00:17:36.626 "adrfam": "IPv4", 00:17:36.626 "traddr": "10.0.0.1", 00:17:36.626 "trsvcid": "56052" 00:17:36.626 }, 00:17:36.626 "auth": { 00:17:36.626 "state": "completed", 00:17:36.626 "digest": "sha512", 00:17:36.626 "dhgroup": "ffdhe6144" 00:17:36.626 } 00:17:36.626 } 00:17:36.626 ]' 00:17:36.626 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.626 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.626 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.626 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:36.626 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.626 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.626 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.626 14:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.887 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret 
DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:17:36.887 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:17:37.459 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.459 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:37.459 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.459 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.459 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.459 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.459 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:37.459 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:37.721 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:37.721 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.721 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:37.721 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:37.721 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:37.721 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.721 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:37.721 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.721 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.721 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.721 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:37.721 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:17:37.721 14:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.982 00:17:37.982 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.982 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.982 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.244 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.244 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.244 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.244 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.244 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.244 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.244 { 00:17:38.244 "cntlid": 135, 00:17:38.244 "qid": 0, 00:17:38.244 "state": "enabled", 00:17:38.244 "thread": "nvmf_tgt_poll_group_000", 00:17:38.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:38.244 "listen_address": { 00:17:38.244 "trtype": "TCP", 00:17:38.244 "adrfam": "IPv4", 00:17:38.244 "traddr": "10.0.0.2", 00:17:38.244 "trsvcid": "4420" 00:17:38.244 }, 00:17:38.244 "peer_address": { 00:17:38.244 "trtype": "TCP", 00:17:38.244 "adrfam": "IPv4", 00:17:38.244 "traddr": "10.0.0.1", 00:17:38.244 "trsvcid": "56092" 00:17:38.244 }, 00:17:38.244 "auth": { 00:17:38.244 "state": "completed", 00:17:38.244 "digest": "sha512", 00:17:38.244 "dhgroup": "ffdhe6144" 00:17:38.244 } 00:17:38.244 } 00:17:38.244 ]' 00:17:38.244 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.244 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.244 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.244 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:38.244 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.244 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.244 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.244 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.506 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:17:38.506 14:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:17:39.077 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.077 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.077 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.077 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.077 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.077 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.077 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.077 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:39.077 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:39.337 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:39.337 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.337 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:39.337 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:39.337 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:39.337 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.337 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.337 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.337 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.337 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.337 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.337 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.337 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.907 00:17:39.907 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.907 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.907 14:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.907 14:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.907 14:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.907 14:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.908 14:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.908 14:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.908 14:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.908 { 00:17:39.908 "cntlid": 137, 00:17:39.908 "qid": 0, 00:17:39.908 "state": "enabled", 00:17:39.908 "thread": "nvmf_tgt_poll_group_000", 00:17:39.908 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:39.908 "listen_address": { 00:17:39.908 "trtype": "TCP", 00:17:39.908 "adrfam": "IPv4", 00:17:39.908 "traddr": "10.0.0.2", 00:17:39.908 "trsvcid": "4420" 00:17:39.908 }, 00:17:39.908 "peer_address": { 00:17:39.908 "trtype": "TCP", 00:17:39.908 "adrfam": "IPv4", 00:17:39.908 "traddr": "10.0.0.1", 00:17:39.908 "trsvcid": "56122" 00:17:39.908 }, 00:17:39.908 "auth": { 00:17:39.908 "state": "completed", 00:17:39.908 "digest": "sha512", 00:17:39.908 "dhgroup": "ffdhe8192" 00:17:39.908 } 00:17:39.908 } 00:17:39.908 ]' 00:17:39.908 14:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.908 14:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.908 14:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.908 14:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:39.908 14:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.168 14:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.168 14:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.168 14:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.168 14:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:17:40.168 14:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:17:40.739 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.739 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.739 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.739 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.005 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.005 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.005 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:41.005 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:41.005 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:41.005 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.005 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:41.005 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:41.005 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:41.005 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.005 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.005 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.005 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.005 14:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.005 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.005 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.005 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.687 00:17:41.687 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.687 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.687 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.687 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.687 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.687 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.687 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.687 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.687 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.687 { 00:17:41.687 "cntlid": 139, 00:17:41.687 "qid": 0, 00:17:41.687 "state": "enabled", 00:17:41.687 "thread": "nvmf_tgt_poll_group_000", 00:17:41.687 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:41.687 "listen_address": { 00:17:41.687 "trtype": "TCP", 00:17:41.687 "adrfam": "IPv4", 00:17:41.687 "traddr": "10.0.0.2", 00:17:41.687 "trsvcid": "4420" 00:17:41.687 }, 00:17:41.687 "peer_address": { 00:17:41.687 "trtype": "TCP", 00:17:41.687 "adrfam": "IPv4", 00:17:41.687 "traddr": "10.0.0.1", 00:17:41.687 "trsvcid": "52384" 00:17:41.687 }, 00:17:41.687 "auth": { 00:17:41.687 "state": "completed", 00:17:41.687 "digest": "sha512", 00:17:41.687 "dhgroup": "ffdhe8192" 00:17:41.687 } 00:17:41.687 } 00:17:41.687 ]' 00:17:41.687 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.687 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.687 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.687 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:41.688 14:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.003 14:07:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.003 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.003 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.003 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:17:42.003 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: --dhchap-ctrl-secret DHHC-1:02:NDMyZTZjMDY2MmNmZDk4NzRkMTBkYWQ4NmJjNzJhMTRkNThhYTdkMjhhOTBkY2U16TAMZg==: 00:17:42.601 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.601 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:42.601 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.601 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.601 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.601 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.601 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:42.601 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:42.862 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:42.862 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.862 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:42.862 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:42.862 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:42.862 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.862 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.862 14:07:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.862 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.862 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.862 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.862 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.862 14:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.433 00:17:43.433 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.433 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.433 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.433 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.433 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.433 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.433 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.433 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.433 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.433 { 00:17:43.433 "cntlid": 141, 00:17:43.433 "qid": 0, 00:17:43.433 "state": "enabled", 00:17:43.433 "thread": "nvmf_tgt_poll_group_000", 00:17:43.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:43.433 "listen_address": { 00:17:43.433 "trtype": "TCP", 00:17:43.433 "adrfam": "IPv4", 00:17:43.433 "traddr": "10.0.0.2", 00:17:43.433 "trsvcid": "4420" 00:17:43.433 }, 00:17:43.433 "peer_address": { 00:17:43.433 "trtype": "TCP", 00:17:43.433 "adrfam": "IPv4", 00:17:43.433 "traddr": "10.0.0.1", 00:17:43.433 "trsvcid": "52398" 00:17:43.433 }, 00:17:43.433 "auth": { 00:17:43.433 "state": "completed", 00:17:43.433 "digest": "sha512", 00:17:43.433 "dhgroup": "ffdhe8192" 00:17:43.433 } 00:17:43.433 } 00:17:43.433 ]' 00:17:43.433 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.433 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.433 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.433 14:07:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:43.433 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.693 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.693 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.693 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.693 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:17:43.693 14:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:01:YjM4Y2NjMGJmYWFmNGZhMTVmZTEyZjg4MmI2MmE5NzXMhAqZ: 00:17:44.262 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.262 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.262 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.262 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.262 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.262 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.262 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:44.262 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:44.523 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:44.523 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.523 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:44.523 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:44.523 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:44.523 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.523 14:07:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:44.523 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.523 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.523 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.523 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:44.523 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.523 14:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.095 00:17:45.095 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.095 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.095 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.356 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.356 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.356 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.356 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.356 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.356 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.356 { 00:17:45.356 "cntlid": 143, 00:17:45.356 "qid": 0, 00:17:45.356 "state": "enabled", 00:17:45.356 "thread": "nvmf_tgt_poll_group_000", 00:17:45.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:45.356 "listen_address": { 00:17:45.356 "trtype": "TCP", 00:17:45.356 "adrfam": "IPv4", 00:17:45.356 "traddr": "10.0.0.2", 00:17:45.356 "trsvcid": "4420" 00:17:45.356 }, 00:17:45.356 "peer_address": { 00:17:45.356 "trtype": "TCP", 00:17:45.356 "adrfam": "IPv4", 00:17:45.356 "traddr": "10.0.0.1", 00:17:45.356 "trsvcid": "52420" 00:17:45.356 }, 00:17:45.356 "auth": { 00:17:45.356 "state": "completed", 00:17:45.356 "digest": "sha512", 00:17:45.356 "dhgroup": "ffdhe8192" 00:17:45.356 } 00:17:45.356 } 00:17:45.356 ]' 00:17:45.356 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.356 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.356 
14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.356 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:45.356 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.356 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.356 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.356 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.617 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:17:45.617 14:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:17:46.214 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.214 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:46.214 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.214 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.214 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.214 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:46.215 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:46.215 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:46.215 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:46.215 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:46.215 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:46.476 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:46.476 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.476 14:07:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:46.476 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:46.476 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:46.476 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.476 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.476 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.476 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.476 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.476 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.476 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.476 14:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.737 00:17:46.997 14:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.997 14:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.997 14:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.997 14:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.997 14:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.997 14:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.997 14:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.997 14:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.997 14:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.997 { 00:17:46.997 "cntlid": 145, 00:17:46.997 "qid": 0, 00:17:46.997 "state": "enabled", 00:17:46.997 "thread": "nvmf_tgt_poll_group_000", 00:17:46.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:46.997 "listen_address": { 00:17:46.997 "trtype": "TCP", 00:17:46.997 "adrfam": "IPv4", 00:17:46.997 "traddr": "10.0.0.2", 00:17:46.997 "trsvcid": "4420" 00:17:46.997 }, 00:17:46.997 "peer_address": { 00:17:46.997 
"trtype": "TCP", 00:17:46.997 "adrfam": "IPv4", 00:17:46.997 "traddr": "10.0.0.1", 00:17:46.997 "trsvcid": "52436" 00:17:46.997 }, 00:17:46.997 "auth": { 00:17:46.997 "state": "completed", 00:17:46.997 "digest": "sha512", 00:17:46.997 "dhgroup": "ffdhe8192" 00:17:46.997 } 00:17:46.997 } 00:17:46.997 ]' 00:17:46.997 14:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.997 14:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.997 14:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.258 14:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:47.258 14:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.258 14:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.258 14:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.258 14:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.258 14:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:17:47.258 14:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzJlNzYwMTYzMzhmNWY4NDIyM2JlMzk4NWNjZjNjNDg0N2I3NDEyNmNhMWVlM2ExCdbh8w==: --dhchap-ctrl-secret DHHC-1:03:OTg4MWVjMzI1NDAwMzA5M2VjMGNhYzI4YWViZGFlOGJmZTU3NDMyMTVjOGM5NmU5YmFhZTExMzRiZjk5NmNlMc0Does=: 00:17:48.201 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.201 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:48.201 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.201 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.201 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.201 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:17:48.201 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.201 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.201 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.201 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:48.201 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:48.201 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:48.201 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:48.201 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:48.201 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:48.201 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:48.201 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:48.201 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:48.201 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:48.461 request: 00:17:48.461 { 00:17:48.461 "name": "nvme0", 00:17:48.461 "trtype": "tcp", 00:17:48.461 "traddr": "10.0.0.2", 00:17:48.461 "adrfam": "ipv4", 00:17:48.461 "trsvcid": "4420", 00:17:48.461 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:48.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:48.462 "prchk_reftag": false, 00:17:48.462 "prchk_guard": false, 00:17:48.462 "hdgst": false, 00:17:48.462 "ddgst": false, 00:17:48.462 "dhchap_key": "key2", 00:17:48.462 "allow_unrecognized_csi": false, 00:17:48.462 "method": "bdev_nvme_attach_controller", 00:17:48.462 "req_id": 1 00:17:48.462 } 00:17:48.462 Got JSON-RPC error response 00:17:48.462 response: 00:17:48.462 { 00:17:48.462 "code": -5, 00:17:48.462 "message": "Input/output error" 00:17:48.462 } 00:17:48.462 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:48.462 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:48.462 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:48.462 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:48.462 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:48.462 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.462 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.462 14:07:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.462 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.462 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.462 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.462 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.462 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:48.462 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:48.462 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:48.462 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:48.462 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:48.462 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:48.462 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:48.462 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:48.462 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:48.462 14:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:49.033 request: 00:17:49.033 { 00:17:49.033 "name": "nvme0", 00:17:49.033 "trtype": "tcp", 00:17:49.033 "traddr": "10.0.0.2", 00:17:49.033 "adrfam": "ipv4", 00:17:49.033 "trsvcid": "4420", 00:17:49.033 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:49.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:49.033 "prchk_reftag": false, 00:17:49.033 "prchk_guard": false, 00:17:49.033 "hdgst": false, 00:17:49.033 "ddgst": false, 00:17:49.033 "dhchap_key": "key1", 00:17:49.033 "dhchap_ctrlr_key": "ckey2", 00:17:49.033 "allow_unrecognized_csi": false, 00:17:49.033 "method": "bdev_nvme_attach_controller", 00:17:49.033 "req_id": 1 00:17:49.033 } 00:17:49.033 Got JSON-RPC error response 00:17:49.033 response: 00:17:49.033 { 00:17:49.033 "code": -5, 00:17:49.033 "message": "Input/output error" 00:17:49.033 } 00:17:49.033 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:49.033 14:07:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:49.033 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:49.033 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:49.033 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.033 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.033 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.033 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.033 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:17:49.033 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.033 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.033 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.033 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.033 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:49.033 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.033 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:49.033 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.033 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:49.033 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.033 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.033 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.033 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.293 request: 00:17:49.293 { 00:17:49.293 "name": "nvme0", 00:17:49.293 "trtype": "tcp", 00:17:49.293 "traddr": "10.0.0.2", 00:17:49.293 "adrfam": "ipv4", 00:17:49.293 "trsvcid": "4420", 00:17:49.293 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:49.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:49.293 "prchk_reftag": false, 00:17:49.293 "prchk_guard": false, 00:17:49.293 "hdgst": false, 00:17:49.293 "ddgst": false, 00:17:49.293 "dhchap_key": "key1", 00:17:49.293 "dhchap_ctrlr_key": "ckey1", 00:17:49.293 "allow_unrecognized_csi": false, 00:17:49.293 "method": "bdev_nvme_attach_controller", 00:17:49.293 "req_id": 1 00:17:49.293 } 00:17:49.293 Got JSON-RPC error response 00:17:49.293 response: 00:17:49.293 { 00:17:49.293 "code": -5, 00:17:49.293 "message": "Input/output error" 00:17:49.293 } 00:17:49.294 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:49.294 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:49.294 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:49.294 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:49.294 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.294 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.294 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.294 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.294 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2697977 00:17:49.294 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2697977 ']' 00:17:49.294 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2697977 00:17:49.294 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:49.294 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.294 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2697977 00:17:49.555 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:49.555 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:49.555 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2697977' 00:17:49.555 killing process with pid 2697977 00:17:49.555 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2697977 00:17:49.555 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2697977 00:17:49.555 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:49.555 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:49.555 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:49.555 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:49.555 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2723407 00:17:49.555 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2723407 00:17:49.555 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:49.555 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2723407 ']' 00:17:49.555 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.555 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.555 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.555 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.556 14:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.498 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.498 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:50.498 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:50.498 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:50.498 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.498 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.498 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:50.498 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2723407 00:17:50.498 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2723407 ']' 00:17:50.498 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.498 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:50.498 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
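[Editor's note, not part of the captured output: the trace above has just killed the previous target (pid 2697977) and relaunched it with --wait-for-rpc and the nvmf_auth debug log flag, then blocks until the JSON-RPC socket answers. A minimal sketch of that restart-and-wait pattern follows; SPDK_DIR is a placeholder, and the ip netns exec wrapper this job uses is omitted for brevity:

SPDK_DIR=/path/to/spdk   # placeholder; the job uses its Jenkins workspace checkout
"$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
# Poll until the app answers on its RPC socket; spdk_get_version is a cheap query
# that works even before subsystem initialization.
until "$SPDK_DIR"/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done
# --wait-for-rpc holds the app in a pre-init state so keys and options can be
# loaded first; framework_start_init releases it.
"$SPDK_DIR"/scripts/rpc.py framework_start_init
]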
00:17:50.498 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:50.498 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.759 null0 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.eLZ 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.97R ]] 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.97R 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Kyn 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.ssA ]] 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ssA 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:50.759 14:07:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.FTr 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.6Sq ]] 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6Sq 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Sd5 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.759 14:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.759 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.759 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:50.759 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:50.759 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.759 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:50.759 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:50.759 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:50.759 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.759 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:50.759 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.759 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.759 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.759 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:50.759 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
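
Stripped of the xtrace noise, the connect_authenticate step above is three RPCs. A sketch with the key file, NQNs, and sockets taken verbatim from this run (rpc.py is spdk/scripts/rpc.py; the ckey* entries registered above are the controller-side counterparts):

    # Target side (default /var/tmp/spdk.sock): register the key, authorize the host with it.
    rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.Sd5
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key3
    # Host side: attach a controller that must pass DH-HMAC-CHAP with the same key.
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

The qpairs dump that follows is how the suite verifies the negotiated parameters (auth.state == completed, digest and dhgroup as requested).
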
00:17:50.759 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:51.701 nvme0n1
00:17:51.702 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:51.702 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:51.702 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:51.702 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:51.702 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:51.702 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.702 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:51.702 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:51.702 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:51.702 {
00:17:51.702 "cntlid": 1,
00:17:51.702 "qid": 0,
00:17:51.702 "state": "enabled",
00:17:51.702 "thread": "nvmf_tgt_poll_group_000",
00:17:51.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:17:51.702 "listen_address": {
00:17:51.702 "trtype": "TCP",
00:17:51.702 "adrfam": "IPv4",
00:17:51.702 "traddr": "10.0.0.2",
00:17:51.702 "trsvcid": "4420"
00:17:51.702 },
00:17:51.702 "peer_address": {
00:17:51.702 "trtype": "TCP",
00:17:51.702 "adrfam": "IPv4",
00:17:51.702 "traddr": "10.0.0.1",
00:17:51.702 "trsvcid": "48412"
00:17:51.702 },
00:17:51.702 "auth": {
00:17:51.702 "state": "completed",
00:17:51.702 "digest": "sha512",
00:17:51.702 "dhgroup": "ffdhe8192"
00:17:51.702 }
00:17:51.702 }
00:17:51.702 ]'
00:17:51.702 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:51.702 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:51.962 14:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:51.963 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:51.963 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:51.963 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:51.963 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:51.963 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
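
The same handshake is then exercised through the kernel initiator: nvme-cli is handed the DHHC secret directly instead of a keyring name. Condensed from the entries that follow (values verbatim from this run; treat it as an illustration, not the suite's exact helper):

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    secret='DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=:'
    # -l 0 (ctrl-loss timeout) makes a failed handshake return promptly.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "${hostnqn##*uuid:}" -l 0 --dhchap-secret "$secret"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
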
00:17:52.223 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=:
00:17:52.223 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=:
00:17:52.797 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:52.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:52.797 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:17:52.797 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:52.797 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:52.797 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:52.797 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3
00:17:52.797 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:52.797 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:52.797 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:52.797 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256
00:17:52.797 14:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
00:17:53.058 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:17:53.058 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:53.058 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:17:53.058 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:17:53.058 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:53.058 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:17:53.059 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:53.059 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:53.059 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.059 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.059 request: 00:17:53.059 { 00:17:53.059 "name": "nvme0", 00:17:53.059 "trtype": "tcp", 00:17:53.059 "traddr": "10.0.0.2", 00:17:53.059 "adrfam": "ipv4", 00:17:53.059 "trsvcid": "4420", 00:17:53.059 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:53.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:53.059 "prchk_reftag": false, 00:17:53.059 "prchk_guard": false, 00:17:53.059 "hdgst": false, 00:17:53.059 "ddgst": false, 00:17:53.059 "dhchap_key": "key3", 00:17:53.059 "allow_unrecognized_csi": false, 00:17:53.059 "method": "bdev_nvme_attach_controller", 00:17:53.059 "req_id": 1 00:17:53.059 } 00:17:53.059 Got JSON-RPC error response 00:17:53.059 response: 00:17:53.059 { 00:17:53.059 "code": -5, 00:17:53.059 "message": "Input/output error" 00:17:53.059 } 00:17:53.059 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:53.059 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:53.059 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:53.059 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:53.059 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:53.059 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:53.059 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:53.059 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:53.331 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:53.331 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:53.331 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:53.331 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:53.331 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:53.331 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:53.331 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:53.331 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:53.331 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.331 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.331 request: 00:17:53.331 { 00:17:53.331 "name": "nvme0", 00:17:53.331 "trtype": "tcp", 00:17:53.331 "traddr": "10.0.0.2", 00:17:53.331 "adrfam": "ipv4", 00:17:53.331 "trsvcid": "4420", 00:17:53.331 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:53.331 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:53.331 "prchk_reftag": false, 00:17:53.331 "prchk_guard": false, 00:17:53.331 "hdgst": false, 00:17:53.331 "ddgst": false, 00:17:53.331 "dhchap_key": "key3", 00:17:53.331 "allow_unrecognized_csi": false, 00:17:53.331 "method": "bdev_nvme_attach_controller", 00:17:53.331 "req_id": 1 00:17:53.331 } 00:17:53.331 Got JSON-RPC error response 00:17:53.331 response: 00:17:53.331 { 00:17:53.331 "code": -5, 00:17:53.331 "message": "Input/output error" 00:17:53.331 } 00:17:53.332 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:53.591 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:53.591 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:53.591 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:53.591 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:53.591 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:53.591 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:53.591 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:53.591 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:53.591 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:53.591 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.591 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.591 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.591 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.591 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.591 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.591 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.591 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.591 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:53.591 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:53.591 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:53.591 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:53.591 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:53.592 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:53.592 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:53.592 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:53.592 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:53.592 14:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:53.852 request: 00:17:53.852 { 00:17:53.852 "name": "nvme0", 00:17:53.852 "trtype": "tcp", 00:17:53.852 "traddr": "10.0.0.2", 00:17:53.852 "adrfam": "ipv4", 00:17:53.852 "trsvcid": "4420", 00:17:53.852 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:53.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:53.852 "prchk_reftag": false, 00:17:53.852 "prchk_guard": false, 00:17:53.852 "hdgst": false, 00:17:53.852 "ddgst": false, 00:17:53.852 "dhchap_key": "key0", 00:17:53.852 "dhchap_ctrlr_key": "key1", 00:17:53.852 "allow_unrecognized_csi": false, 00:17:53.852 "method": "bdev_nvme_attach_controller", 00:17:53.852 "req_id": 1 00:17:53.852 } 00:17:53.852 Got JSON-RPC error response 00:17:53.852 response: 00:17:53.852 { 00:17:53.852 "code": -5, 00:17:53.852 "message": "Input/output error" 00:17:53.852 } 00:17:54.112 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:54.112 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:54.112 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:54.112 14:08:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:54.112 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:54.112 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:54.112 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:54.112 nvme0n1 00:17:54.372 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:54.372 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:54.372 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.372 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.372 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.372 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.633 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:17:54.633 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.633 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.633 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.633 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:54.633 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:54.633 14:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:55.576 nvme0n1 00:17:55.576 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:55.576 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:55.576 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.576 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.576 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:55.576 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.576 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.576 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.576 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:55.576 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:55.576 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.837 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.838 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:17:55.838 14:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: --dhchap-ctrl-secret DHHC-1:03:OGZhMGM3M2ViNWMzYmRlYTQzMzJhZTVlMTExYWFmOGM5YTNmZGZhMjk4MjVhNTI5NDViZDc3YmFiMGI5YTAzMq6Qe3s=: 00:17:56.410 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:56.410 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:56.410 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:56.410 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:56.410 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:56.410 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:56.410 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:56.410 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.410 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.410 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:17:56.410 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:56.410 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:56.410 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:56.410 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:56.410 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:56.410 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:56.410 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:56.410 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:56.410 14:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:56.982 request: 00:17:56.982 { 00:17:56.982 "name": "nvme0", 00:17:56.982 "trtype": "tcp", 00:17:56.982 "traddr": "10.0.0.2", 00:17:56.982 "adrfam": "ipv4", 00:17:56.982 "trsvcid": "4420", 00:17:56.982 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:56.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:56.982 "prchk_reftag": false, 00:17:56.982 "prchk_guard": false, 00:17:56.982 "hdgst": false, 00:17:56.982 "ddgst": false, 00:17:56.982 "dhchap_key": "key1", 00:17:56.982 "allow_unrecognized_csi": false, 00:17:56.982 "method": "bdev_nvme_attach_controller", 00:17:56.982 "req_id": 1 00:17:56.982 } 00:17:56.982 Got JSON-RPC error response 00:17:56.982 response: 00:17:56.982 { 00:17:56.982 "code": -5, 00:17:56.982 "message": "Input/output error" 00:17:56.982 } 00:17:56.982 14:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:56.982 14:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:56.982 14:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:56.982 14:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:56.982 14:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:56.982 14:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:56.982 14:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:57.926 nvme0n1 00:17:57.926 14:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:57.926 14:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:57.926 14:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.926 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.926 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.926 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.186 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.187 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.187 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.187 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.187 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:58.187 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:58.187 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:58.448 nvme0n1 00:17:58.448 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:58.448 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:58.448 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.448 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.448 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.448 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.709 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:58.709 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.709 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.709 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.709 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: '' 2s 00:17:58.709 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:58.709 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:58.709 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: 00:17:58.709 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:58.709 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:58.709 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:58.709 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: ]] 00:17:58.709 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZDZjOGMxZDQ1M2Q5ZGJlZWRkMTY0Zjk2MjEyNGE4MWMT94Uv: 00:17:58.710 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:58.710 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:58.710 14:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:00.619 14:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:00.619 14:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:00.619 14:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:00.619 14:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:00.619 14:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:00.619 14:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:00.880 14:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:00.880 14:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:00.880 14:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.880 14:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.880 14:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.880 14:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: 2s 00:18:00.880 14:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:00.880 14:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:00.880 14:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:00.880 14:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: 00:18:00.880 14:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:00.880 14:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:00.880 14:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:00.880 14:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: ]] 00:18:00.880 14:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NTY2MDFmYTJmYWMyYWQxNmUxMTI5NjA5OGUzZWRhYTRmYzhkY2JmYTA2YWZkNGQ2BpQKCA==: 00:18:00.880 14:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:00.880 14:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:02.794 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:02.794 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:02.794 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:02.794 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:02.794 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:02.794 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:02.794 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:02.794 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.794 14:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:02.794 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.794 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.794 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.794 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:02.794 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:02.794 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:03.738 nvme0n1 00:18:03.738 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:03.738 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.738 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.738 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.738 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:03.738 14:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:03.997 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:03.997 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:03.997 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.257 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.257 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.257 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.257 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.257 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.257 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:04.257 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:04.517 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:04.517 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:04.517 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers
00:18:04.517 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:04.517 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:04.517 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:04.517 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:04.517 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:04.517 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:04.517 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:18:04.517 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:04.517 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:18:04.517 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:04.517 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:18:04.517 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:04.517 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:04.517 14:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:05.088 request:
00:18:05.088 {
00:18:05.088 "name": "nvme0",
00:18:05.088 "dhchap_key": "key1",
00:18:05.088 "dhchap_ctrlr_key": "key3",
00:18:05.088 "method": "bdev_nvme_set_keys",
00:18:05.088 "req_id": 1
00:18:05.088 }
00:18:05.088 Got JSON-RPC error response
00:18:05.088 response:
00:18:05.088 {
00:18:05.088 "code": -13,
00:18:05.088 "message": "Permission denied"
00:18:05.088 }
00:18:05.088 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:18:05.088 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:05.088 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:05.088 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:05.088 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:18:05.088 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:18:05.088 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
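
The -13 here is the expected outcome: bdev_nvme_set_keys may only switch a live controller to a pair the subsystem has already been handed via nvmf_subsystem_set_keys. A sketch of that contract, with sockets and NQNs as in this run:

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    # Target learns the new pair first...
    rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # ...so the host may re-key to the matching pair:
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # A pair the target was not given is refused with -13 (Permission denied):
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key key3
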
00:18:05.088 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 ))
00:18:05.088 14:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s
00:18:06.470 14:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:18:06.470 14:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:18:06.470 14:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:06.470 14:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 ))
00:18:06.470 14:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:06.470 14:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:06.470 14:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:06.470 14:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:06.470 14:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:06.470 14:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:06.470 14:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:07.042 nvme0n1
00:18:07.042 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:07.042 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:07.042 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:07.042 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:07.042 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:18:07.042 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:18:07.042 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:18:07.042 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
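
After a rejected re-key, the controller (attached with --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 precisely so failures resolve within a second) is expected to drop. The (( 1 != 0 )) / sleep 1s / (( 0 != 0 )) trace above is a poll loop; reconstructed roughly, under the same socket assumption:

    # Wait for the host's controller list to drain to zero entries.
    while (( $(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length) != 0 )); do
        sleep 1s
    done
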
00:18:07.042 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:07.042 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:07.042 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:07.042 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:07.043 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:07.614 request: 00:18:07.615 { 00:18:07.615 "name": "nvme0", 00:18:07.615 "dhchap_key": "key2", 00:18:07.615 "dhchap_ctrlr_key": "key0", 00:18:07.615 "method": "bdev_nvme_set_keys", 00:18:07.615 "req_id": 1 00:18:07.615 } 00:18:07.615 Got JSON-RPC error response 00:18:07.615 response: 00:18:07.615 { 00:18:07.615 "code": -13, 00:18:07.615 "message": "Permission denied" 00:18:07.615 } 00:18:07.615 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:07.615 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:07.615 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:07.615 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:07.615 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:07.615 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:07.615 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.875 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:07.875 14:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:08.818 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:08.818 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:08.818 14:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.086 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:09.086 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:09.086 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:09.086 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2697999 00:18:09.086 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2697999 ']' 00:18:09.086 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2697999 00:18:09.086 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:09.086 
14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:09.086 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2697999 00:18:09.086 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:09.086 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:09.086 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2697999' 00:18:09.086 killing process with pid 2697999 00:18:09.086 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2697999 00:18:09.086 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2697999 00:18:09.346 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:09.346 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:09.346 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:09.346 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:09.346 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:09.346 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:09.346 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:09.346 rmmod nvme_tcp 00:18:09.346 rmmod nvme_fabrics 00:18:09.346 rmmod nvme_keyring 00:18:09.346 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:09.346 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:09.346 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:09.346 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2723407 ']' 00:18:09.346 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2723407 00:18:09.346 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2723407 ']' 00:18:09.346 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2723407 00:18:09.346 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:09.346 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:09.346 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2723407 00:18:09.346 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:09.346 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:09.346 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2723407' 00:18:09.346 killing process with pid 2723407 00:18:09.346 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2723407 00:18:09.346 14:08:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2723407 00:18:09.607 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:09.607 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:09.607 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:09.607 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:09.607 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:09.607 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:09.607 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:09.607 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:09.607 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:09.607 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.607 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:09.607 14:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.524 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:11.524 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.eLZ /tmp/spdk.key-sha256.Kyn /tmp/spdk.key-sha384.FTr /tmp/spdk.key-sha512.Sd5 /tmp/spdk.key-sha512.97R /tmp/spdk.key-sha384.ssA /tmp/spdk.key-sha256.6Sq '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:11.524 00:18:11.524 real 2m31.987s 00:18:11.524 user 5m43.587s 00:18:11.524 sys 0m21.808s 00:18:11.524 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:11.524 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.524 ************************************ 00:18:11.524 END TEST nvmf_auth_target 00:18:11.524 ************************************ 00:18:11.524 14:08:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:11.525 14:08:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:11.525 14:08:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:11.525 14:08:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:11.525 14:08:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:11.525 ************************************ 00:18:11.525 START TEST nvmf_bdevio_no_huge 00:18:11.525 ************************************ 00:18:11.525 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:11.787 * Looking for test storage... 
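The run_test wrapper that launches the suite above just prints the START/END banners, times the script, and propagates its exit code; the same run can be reproduced by hand from an SPDK checkout (workspace path taken from this job):

    # Reproduce the no-hugepage bdevio run outside the CI harness (sketch)
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages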
00:18:11.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:11.787 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:11.787 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:18:11.787 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:11.787 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:11.787 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:11.787 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:11.787 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:11.787 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:11.787 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:11.787 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:11.787 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:11.787 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:11.787 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:11.787 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:11.787 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:11.787 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:11.787 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:11.787 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:11.787 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:11.787 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:11.787 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:11.787 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:11.787 14:08:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:11.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.787 --rc genhtml_branch_coverage=1 00:18:11.787 --rc genhtml_function_coverage=1 00:18:11.787 --rc genhtml_legend=1 00:18:11.787 --rc geninfo_all_blocks=1 00:18:11.787 --rc geninfo_unexecuted_blocks=1 00:18:11.787 00:18:11.787 ' 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:11.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.787 --rc genhtml_branch_coverage=1 00:18:11.787 --rc genhtml_function_coverage=1 00:18:11.787 --rc genhtml_legend=1 00:18:11.787 --rc geninfo_all_blocks=1 00:18:11.787 --rc geninfo_unexecuted_blocks=1 00:18:11.787 00:18:11.787 ' 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:11.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.787 --rc genhtml_branch_coverage=1 00:18:11.787 --rc genhtml_function_coverage=1 00:18:11.787 --rc genhtml_legend=1 00:18:11.787 --rc geninfo_all_blocks=1 00:18:11.787 --rc geninfo_unexecuted_blocks=1 00:18:11.787 00:18:11.787 ' 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:11.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.787 --rc genhtml_branch_coverage=1 00:18:11.787 --rc genhtml_function_coverage=1 00:18:11.787 --rc genhtml_legend=1 00:18:11.787 --rc geninfo_all_blocks=1 00:18:11.787 --rc geninfo_unexecuted_blocks=1 00:18:11.787 00:18:11.787 ' 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.787 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:11.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:11.788 14:08:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:20.043 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:20.043 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:20.043 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:20.043 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:20.043 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:20.043 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:20.043 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:20.043 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:20.043 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:20.043 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:20.043 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:20.043 
14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:20.043 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:20.043 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:20.043 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:20.043 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:20.043 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:20.043 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:20.043 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:20.043 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:20.043 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:20.043 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:20.044 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:20.044 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:20.044 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:20.044 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:20.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:20.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:18:20.044 00:18:20.044 --- 10.0.0.2 ping statistics --- 00:18:20.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.044 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:20.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:20.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:18:20.044 00:18:20.044 --- 10.0.0.1 ping statistics --- 00:18:20.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.044 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2731454 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2731454 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2731454 ']' 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:18:20.044 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.045 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:20.045 14:08:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:20.045 [2024-12-05 14:08:25.557293] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:18:20.045 [2024-12-05 14:08:25.557382] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:20.045 [2024-12-05 14:08:25.666687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:20.045 [2024-12-05 14:08:25.727469] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:20.045 [2024-12-05 14:08:25.727516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:20.045 [2024-12-05 14:08:25.727528] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:20.045 [2024-12-05 14:08:25.727536] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:20.045 [2024-12-05 14:08:25.727542] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
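A note on the target launch above: the -m 0x78 core mask is binary 0111 1000, i.e. cores 3 through 6, which is why exactly four reactors report in on those cores in the notices that follow; -s 1024 caps the app at 1024 MB, since with --no-huge the memory comes from plain anonymous pages rather than hugepages. A condensed sketch of the launch (binary path relative to the workspace checkout):

    # nvmf target inside the test namespace, no hugepages, pinned to cores 3-6 (0x78)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78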
00:18:20.045 [2024-12-05 14:08:25.729359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:20.045 [2024-12-05 14:08:25.729518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:20.045 [2024-12-05 14:08:25.729676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:20.045 [2024-12-05 14:08:25.729781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:20.305 [2024-12-05 14:08:26.434900] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:20.305 Malloc0 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:20.305 [2024-12-05 14:08:26.488899] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:20.305 { 00:18:20.305 "params": { 00:18:20.305 "name": "Nvme$subsystem", 00:18:20.305 "trtype": "$TEST_TRANSPORT", 00:18:20.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:20.305 "adrfam": "ipv4", 00:18:20.305 "trsvcid": "$NVMF_PORT", 00:18:20.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:20.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:20.305 "hdgst": ${hdgst:-false}, 00:18:20.305 "ddgst": ${ddgst:-false} 00:18:20.305 }, 00:18:20.305 "method": "bdev_nvme_attach_controller" 00:18:20.305 } 00:18:20.305 EOF 00:18:20.305 )") 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:20.305 14:08:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:20.305 "params": { 00:18:20.305 "name": "Nvme1", 00:18:20.305 "trtype": "tcp", 00:18:20.305 "traddr": "10.0.0.2", 00:18:20.305 "adrfam": "ipv4", 00:18:20.305 "trsvcid": "4420", 00:18:20.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.305 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:20.305 "hdgst": false, 00:18:20.305 "ddgst": false 00:18:20.305 }, 00:18:20.305 "method": "bdev_nvme_attach_controller" 00:18:20.305 }' 00:18:20.305 [2024-12-05 14:08:26.548295] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
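The --json /dev/fd/62 argument above means bdevio reads its controller config from an inherited file descriptor rather than a file on disk; the attach-controller JSON printed by gen_nvmf_target_json just before is what travels over that descriptor. A rough equivalent using ordinary process substitution (the descriptor number will differ; gen_nvmf_target_json is the helper from test/nvmf/common.sh sourced by this test):

    # Feed the generated attach-controller JSON straight to bdevio (sketch)
    ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024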
00:18:20.305 [2024-12-05 14:08:26.548369] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2731797 ]
00:18:20.566 [2024-12-05 14:08:26.644666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:18:20.566 [2024-12-05 14:08:26.704507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:20.566 [2024-12-05 14:08:26.704601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:20.566 [2024-12-05 14:08:26.704603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:20.826 I/O targets:
00:18:20.826 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:18:20.826
00:18:20.826
00:18:20.826 CUnit - A unit testing framework for C - Version 2.1-3
00:18:20.826 http://cunit.sourceforge.net/
00:18:20.826
00:18:20.826
00:18:20.826 Suite: bdevio tests on: Nvme1n1
00:18:21.087 Test: blockdev write read block ...passed
00:18:21.087 Test: blockdev write zeroes read block ...passed
00:18:21.087 Test: blockdev write zeroes read no split ...passed
00:18:21.087 Test: blockdev write zeroes read split ...passed
00:18:21.087 Test: blockdev write zeroes read split partial ...passed
00:18:21.087 Test: blockdev reset ...[2024-12-05 14:08:27.274163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:18:21.087 [2024-12-05 14:08:27.274264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1746810 (9): Bad file descriptor
00:18:21.087 [2024-12-05 14:08:27.328770] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:18:21.087 passed 00:18:21.087 Test: blockdev write read 8 blocks ...passed 00:18:21.087 Test: blockdev write read size > 128k ...passed 00:18:21.087 Test: blockdev write read invalid size ...passed 00:18:21.087 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:21.087 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:21.087 Test: blockdev write read max offset ...passed 00:18:21.347 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:21.348 Test: blockdev writev readv 8 blocks ...passed 00:18:21.348 Test: blockdev writev readv 30 x 1block ...passed 00:18:21.348 Test: blockdev writev readv block ...passed 00:18:21.348 Test: blockdev writev readv size > 128k ...passed 00:18:21.348 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:21.348 Test: blockdev comparev and writev ...[2024-12-05 14:08:27.547861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:21.348 [2024-12-05 14:08:27.547894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.348 [2024-12-05 14:08:27.547910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:21.348 [2024-12-05 14:08:27.547918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:21.348 [2024-12-05 14:08:27.548228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:21.348 [2024-12-05 14:08:27.548239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:21.348 [2024-12-05 14:08:27.548253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:21.348 [2024-12-05 14:08:27.548261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:21.348 [2024-12-05 14:08:27.548549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:21.348 [2024-12-05 14:08:27.548560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:21.348 [2024-12-05 14:08:27.548574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:21.348 [2024-12-05 14:08:27.548582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:21.348 [2024-12-05 14:08:27.548877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:21.348 [2024-12-05 14:08:27.548887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:21.348 [2024-12-05 14:08:27.548902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:21.348 [2024-12-05 14:08:27.548911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:21.348 passed 00:18:21.348 Test: blockdev nvme passthru rw ...passed 00:18:21.348 Test: blockdev nvme passthru vendor specific ...[2024-12-05 14:08:27.630956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:21.348 [2024-12-05 14:08:27.630971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:21.348 [2024-12-05 14:08:27.631183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:21.348 [2024-12-05 14:08:27.631193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:21.348 [2024-12-05 14:08:27.631380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:21.348 [2024-12-05 14:08:27.631390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:21.348 [2024-12-05 14:08:27.631618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:21.348 [2024-12-05 14:08:27.631629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:21.348 passed 00:18:21.348 Test: blockdev nvme admin passthru ...passed 00:18:21.608 Test: blockdev copy ...passed 00:18:21.608 00:18:21.608 Run Summary: Type Total Ran Passed Failed Inactive 00:18:21.608 suites 1 1 n/a 0 0 00:18:21.609 tests 23 23 23 0 0 00:18:21.609 asserts 152 152 152 0 n/a 00:18:21.609 00:18:21.609 Elapsed time = 1.198 seconds 00:18:21.869 14:08:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:21.869 14:08:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.869 14:08:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:21.869 14:08:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.869 14:08:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:21.869 14:08:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:21.869 14:08:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:21.869 14:08:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:21.869 14:08:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:21.869 14:08:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:21.869 14:08:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:21.869 14:08:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:21.869 rmmod nvme_tcp 00:18:21.869 rmmod nvme_fabrics 00:18:21.869 rmmod nvme_keyring 00:18:21.869 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:21.869 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:21.869 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:21.870 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2731454 ']' 00:18:21.870 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2731454 00:18:21.870 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2731454 ']' 00:18:21.870 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2731454 00:18:21.870 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:21.870 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.870 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2731454 00:18:21.870 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:21.870 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:21.870 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2731454' 00:18:21.870 killing process with pid 2731454 00:18:21.870 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2731454 00:18:21.870 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2731454 00:18:22.130 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:22.130 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:22.130 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:22.130 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:22.130 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:22.130 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:22.130 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:22.130 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:22.130 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:22.130 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.130 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:22.130 14:08:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:24.678 00:18:24.678 real 0m12.586s 00:18:24.678 user 0m15.107s 00:18:24.678 sys 0m6.614s 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:18:24.678 ************************************ 00:18:24.678 END TEST nvmf_bdevio_no_huge 00:18:24.678 ************************************ 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:24.678 ************************************ 00:18:24.678 START TEST nvmf_tls 00:18:24.678 ************************************ 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:24.678 * Looking for test storage... 00:18:24.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:24.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.678 --rc genhtml_branch_coverage=1 00:18:24.678 --rc genhtml_function_coverage=1 00:18:24.678 --rc genhtml_legend=1 00:18:24.678 --rc geninfo_all_blocks=1 00:18:24.678 --rc geninfo_unexecuted_blocks=1 00:18:24.678 00:18:24.678 ' 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:24.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.678 --rc genhtml_branch_coverage=1 00:18:24.678 --rc genhtml_function_coverage=1 00:18:24.678 --rc genhtml_legend=1 00:18:24.678 --rc geninfo_all_blocks=1 00:18:24.678 --rc geninfo_unexecuted_blocks=1 00:18:24.678 00:18:24.678 ' 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:24.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.678 --rc genhtml_branch_coverage=1 00:18:24.678 --rc genhtml_function_coverage=1 00:18:24.678 --rc genhtml_legend=1 00:18:24.678 --rc geninfo_all_blocks=1 00:18:24.678 --rc geninfo_unexecuted_blocks=1 00:18:24.678 00:18:24.678 ' 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:24.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.678 --rc genhtml_branch_coverage=1 00:18:24.678 --rc genhtml_function_coverage=1 00:18:24.678 --rc genhtml_legend=1 00:18:24.678 --rc geninfo_all_blocks=1 00:18:24.678 --rc geninfo_unexecuted_blocks=1 00:18:24.678 00:18:24.678 ' 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
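
The lcov probe traced above goes through scripts/common.sh's version helpers: lt() defers to cmp_versions(), which splits both version strings on '.', '-' and ':' and compares them numerically field by field, with missing fields counting as 0. A standalone sketch reconstructed from the xtrace (names carry a _sketch suffix to mark them as illustrative; purely numeric components are assumed, which the real helper enforces through its decimal() check):

# Sketch of the lt/cmp_versions logic seen in the trace above.
lt_sketch() { cmp_versions_sketch "$1" '<' "$2"; }
cmp_versions_sketch() {
	local -a ver1 ver2
	local op=$2 ver1_l ver2_l v
	IFS=.-: read -ra ver1 <<< "$1"
	IFS=.-: read -ra ver2 <<< "$3"
	ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
	for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
		# Unset fields evaluate to 0 in bash arithmetic, so 1.15 == 1.15.0.
		((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
		((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
	done
	[[ $op == '==' ]] # every field matched
}

lt_sketch 1.15 2 && echo "lcov 1.15 predates 2.x - use the legacy LCOV_OPTS"
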
00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.678 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:24.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:18:24.679 14:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:32.814 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:32.814 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:32.814 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:32.814 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:32.814 14:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:32.814 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:32.814 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:32.814 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:32.814 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:32.814 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:32.814 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:32.814 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:32.814 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:32.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:32.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:18:32.814 00:18:32.814 --- 10.0.0.2 ping statistics --- 00:18:32.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.814 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:18:32.814 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:32.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:32.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:18:32.814 00:18:32.814 --- 10.0.0.1 ping statistics --- 00:18:32.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.814 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:18:32.814 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.814 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:32.814 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:32.814 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.814 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:32.815 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:32.815 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.815 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:32.815 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:32.815 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:32.815 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:32.815 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:32.815 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.815 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2736167 00:18:32.815 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2736167 00:18:32.815 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:32.815 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2736167 ']' 00:18:32.815 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.815 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.815 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.815 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.815 14:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.815 [2024-12-05 14:08:38.264245] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
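
By this point nvmftestinit has classified the machine's NICs (the two E810 functions found above become cvl_0_0 and cvl_0_1) and nvmf_tcp_init has walled the target port off inside its own network namespace, so 10.0.0.1 <-> 10.0.0.2 traffic actually crosses the physical link instead of being short-circuited over loopback. Condensed from the trace, with the interface and namespace names exactly as in this job:

# Target port cvl_0_0 moves into its own netns; initiator port cvl_0_1 stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Let NVMe/TCP (port 4420) in past any host firewall, then verify both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Every nvmf_tgt below is then launched as:
#   ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt ...
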
00:18:32.815 [2024-12-05 14:08:38.264318] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.815 [2024-12-05 14:08:38.365675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.815 [2024-12-05 14:08:38.416126] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.815 [2024-12-05 14:08:38.416177] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.815 [2024-12-05 14:08:38.416185] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.815 [2024-12-05 14:08:38.416192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.815 [2024-12-05 14:08:38.416198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:32.815 [2024-12-05 14:08:38.416969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.815 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.815 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:32.815 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:32.815 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:32.815 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.075 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.075 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:33.075 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:33.075 true 00:18:33.075 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:33.075 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:33.336 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:33.336 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:33.336 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:33.596 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:33.596 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:33.596 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:33.596 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:33.596 14:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:33.857 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:33.857 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:34.118 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:34.118 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:34.118 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:34.118 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:34.379 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:34.379 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:34.379 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:34.379 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:34.379 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:34.640 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:34.640 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:34.640 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:34.640 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:34.640 14:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:34.901 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:34.902 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:34.902 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:34.902 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:34.902 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:34.902 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:34.902 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:34.902 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:34.902 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:34.902 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:34.902 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:34.902 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:34.902 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:18:34.902 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:34.902 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:34.902 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:34.902 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:34.902 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:34.902 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:34.902 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.zb8TpikKyg 00:18:35.164 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:35.164 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.5OcaItJg6R 00:18:35.164 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:35.164 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:35.164 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.zb8TpikKyg 00:18:35.164 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.5OcaItJg6R 00:18:35.164 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:35.164 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:35.425 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.zb8TpikKyg 00:18:35.425 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zb8TpikKyg 00:18:35.425 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:35.687 [2024-12-05 14:08:41.767666] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.687 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:35.687 14:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:35.949 [2024-12-05 14:08:42.088437] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:35.949 [2024-12-05 14:08:42.088640] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.950 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:36.211 malloc0 00:18:36.211 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:36.211 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zb8TpikKyg 00:18:36.471 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:36.471 14:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.zb8TpikKyg 00:18:48.702 Initializing NVMe Controllers 00:18:48.702 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:48.702 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:48.702 Initialization complete. Launching workers. 00:18:48.702 ======================================================== 00:18:48.702 Latency(us) 00:18:48.702 Device Information : IOPS MiB/s Average min max 00:18:48.702 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18671.46 72.94 3427.88 1049.92 3997.06 00:18:48.702 ======================================================== 00:18:48.702 Total : 18671.46 72.94 3427.88 1049.92 3997.06 00:18:48.702 00:18:48.702 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zb8TpikKyg 00:18:48.702 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:48.702 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:48.702 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:48.702 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zb8TpikKyg 00:18:48.702 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:48.702 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2739199 00:18:48.702 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:48.702 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2739199 /var/tmp/bdevperf.sock 00:18:48.702 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:48.702 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2739199 ']' 00:18:48.702 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:48.702 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.702 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:48.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:48.702 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.702 14:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.703 [2024-12-05 14:08:52.931340] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:18:48.703 [2024-12-05 14:08:52.931396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2739199 ] 00:18:48.703 [2024-12-05 14:08:53.017301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.703 [2024-12-05 14:08:53.052386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.703 14:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:48.703 14:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:48.703 14:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zb8TpikKyg 00:18:48.703 14:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:48.703 [2024-12-05 14:08:54.053206] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:48.703 TLSTESTn1 00:18:48.703 14:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:48.703 Running I/O for 10 seconds... 
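
Stripped of the xtrace prefixes, the happy-path TLS setup that this 10-second run is exercising comes down to a handful of RPCs against the target plus two against the bdevperf instance. The rpc.py path is shortened here for readability; every flag and argument is taken verbatim from the trace:

RPC=scripts/rpc.py   # /var/jenkins/.../spdk/scripts/rpc.py in this job

# Target side: pin the ssl sock impl to TLS 1.3, then build a TLS-enabled subsystem.
$RPC sock_set_default_impl -i ssl
$RPC sock_impl_set_options -i ssl --tls-version 13
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 /tmp/tmp.zb8TpikKyg
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# Initiator side (bdevperf's RPC socket): register the same PSK and attach over TLS.
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zb8TpikKyg
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
	-a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
	-q nqn.2016-06.io.spdk:host1 --psk key0

The warm-up spdk_nvme_perf run above took the raw key file directly via --psk-path instead of going through the keyring.
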
00:18:50.346 5674.00 IOPS, 22.16 MiB/s [2024-12-05T13:08:57.589Z] 5817.50 IOPS, 22.72 MiB/s [2024-12-05T13:08:58.532Z] 5681.00 IOPS, 22.19 MiB/s [2024-12-05T13:08:59.474Z] 5637.75 IOPS, 22.02 MiB/s [2024-12-05T13:09:00.415Z] 5604.40 IOPS, 21.89 MiB/s [2024-12-05T13:09:01.356Z] 5569.17 IOPS, 21.75 MiB/s [2024-12-05T13:09:02.296Z] 5510.14 IOPS, 21.52 MiB/s [2024-12-05T13:09:03.679Z] 5508.75 IOPS, 21.52 MiB/s [2024-12-05T13:09:04.621Z] 5493.33 IOPS, 21.46 MiB/s [2024-12-05T13:09:04.621Z] 5492.70 IOPS, 21.46 MiB/s 00:18:58.321 Latency(us) 00:18:58.321 [2024-12-05T13:09:04.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.321 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:58.321 Verification LBA range: start 0x0 length 0x2000 00:18:58.321 TLSTESTn1 : 10.22 5384.77 21.03 0.00 0.00 23576.45 5761.71 218453.33 00:18:58.321 [2024-12-05T13:09:04.621Z] =================================================================================================================== 00:18:58.321 [2024-12-05T13:09:04.621Z] Total : 5384.77 21.03 0.00 0.00 23576.45 5761.71 218453.33 00:18:58.321 { 00:18:58.321 "results": [ 00:18:58.321 { 00:18:58.321 "job": "TLSTESTn1", 00:18:58.321 "core_mask": "0x4", 00:18:58.321 "workload": "verify", 00:18:58.321 "status": "finished", 00:18:58.321 "verify_range": { 00:18:58.321 "start": 0, 00:18:58.321 "length": 8192 00:18:58.321 }, 00:18:58.321 "queue_depth": 128, 00:18:58.321 "io_size": 4096, 00:18:58.321 "runtime": 10.223828, 00:18:58.321 "iops": 5384.773687507262, 00:18:58.321 "mibps": 21.034272216825244, 00:18:58.321 "io_failed": 0, 00:18:58.321 "io_timeout": 0, 00:18:58.321 "avg_latency_us": 23576.449915051555, 00:18:58.321 "min_latency_us": 5761.706666666667, 00:18:58.321 "max_latency_us": 218453.33333333334 00:18:58.321 } 00:18:58.321 ], 00:18:58.321 "core_count": 1 00:18:58.321 } 00:18:58.321 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:58.321 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2739199 00:18:58.321 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2739199 ']' 00:18:58.321 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2739199 00:18:58.321 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:58.321 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.321 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2739199 00:18:58.321 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:58.321 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:58.321 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2739199' 00:18:58.321 killing process with pid 2739199 00:18:58.321 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2739199 00:18:58.321 Received shutdown signal, test time was about 10.000000 seconds 00:18:58.321 00:18:58.321 Latency(us) 00:18:58.321 [2024-12-05T13:09:04.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.321 [2024-12-05T13:09:04.621Z] 
=================================================================================================================== 00:18:58.321 [2024-12-05T13:09:04.621Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:58.321 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2739199 00:18:58.583 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5OcaItJg6R 00:18:58.583 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:58.583 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5OcaItJg6R 00:18:58.583 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:58.583 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:58.583 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:58.583 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:58.583 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5OcaItJg6R 00:18:58.583 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:58.583 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:58.583 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:58.583 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5OcaItJg6R 00:18:58.583 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:58.583 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2741535 00:18:58.583 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:58.583 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2741535 /var/tmp/bdevperf.sock 00:18:58.583 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:58.583 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2741535 ']' 00:18:58.583 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:58.583 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.583 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:58.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:58.583 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.583 14:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.583 [2024-12-05 14:09:04.739290] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:18:58.583 [2024-12-05 14:09:04.739347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2741535 ] 00:18:58.583 [2024-12-05 14:09:04.824612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.583 [2024-12-05 14:09:04.853458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.523 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.523 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:59.523 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5OcaItJg6R 00:18:59.523 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:59.784 [2024-12-05 14:09:05.853127] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:59.784 [2024-12-05 14:09:05.864293] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:59.784 [2024-12-05 14:09:05.865152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c01be0 (107): Transport endpoint is not connected 00:18:59.784 [2024-12-05 14:09:05.866147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c01be0 (9): Bad file descriptor 00:18:59.784 [2024-12-05 14:09:05.867150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:59.784 [2024-12-05 14:09:05.867157] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:59.784 [2024-12-05 14:09:05.867162] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:59.784 [2024-12-05 14:09:05.867169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
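
The wrong-key case has now played out: bdevperf offered /tmp/tmp.5OcaItJg6R, the target only knows the key in /tmp/tmp.zb8TpikKyg, so the TLS handshake never completes and the controller lands in failed state. Both files were minted earlier by format_interchange_psk. Decoding the printed keys, the layout appears to be prefix, a two-digit hash id (01 matching digest=1), and base64 of the secret's ASCII bytes followed by a CRC32; the sketch below reconstructs that packing (the CRC byte order is an assumption, and the helper is written fresh here, not lifted from nvmf/common.sh):

# Sketch of the PSK interchange packing inferred from the keys printed above.
format_interchange_psk_sketch() {
	local key=$1 digest=${2:-1}
	python3 - "$key" "$digest" <<-'PY'
		import base64, sys, zlib
		key, digest = sys.argv[1].encode(), int(sys.argv[2])
		crc = zlib.crc32(key).to_bytes(4, "little")  # integrity check only, not a MAC
		print("NVMeTLSkey-1:%02d:%s:" % (digest, base64.b64encode(key + crc).decode()))
	PY
}

format_interchange_psk_sketch 00112233445566778899aabbccddeeff 1
# expected per the trace (given the assumed CRC byte order):
#   NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The JSON-RPC view of that failed attach follows.
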
00:18:59.784 request: 00:18:59.784 { 00:18:59.784 "name": "TLSTEST", 00:18:59.784 "trtype": "tcp", 00:18:59.784 "traddr": "10.0.0.2", 00:18:59.784 "adrfam": "ipv4", 00:18:59.784 "trsvcid": "4420", 00:18:59.784 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:59.784 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:59.784 "prchk_reftag": false, 00:18:59.784 "prchk_guard": false, 00:18:59.784 "hdgst": false, 00:18:59.784 "ddgst": false, 00:18:59.784 "psk": "key0", 00:18:59.784 "allow_unrecognized_csi": false, 00:18:59.784 "method": "bdev_nvme_attach_controller", 00:18:59.784 "req_id": 1 00:18:59.784 } 00:18:59.784 Got JSON-RPC error response 00:18:59.784 response: 00:18:59.784 { 00:18:59.784 "code": -5, 00:18:59.784 "message": "Input/output error" 00:18:59.784 } 00:18:59.784 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2741535 00:18:59.784 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2741535 ']' 00:18:59.784 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2741535 00:18:59.784 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:59.784 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:59.784 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2741535 00:18:59.784 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:59.784 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:59.784 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2741535' 00:18:59.784 killing process with pid 2741535 00:18:59.784 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2741535 00:18:59.784 Received shutdown signal, test time was about 10.000000 seconds 00:18:59.784 00:18:59.784 Latency(us) 00:18:59.784 [2024-12-05T13:09:06.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.784 [2024-12-05T13:09:06.084Z] =================================================================================================================== 00:18:59.784 [2024-12-05T13:09:06.084Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:59.784 14:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2741535 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zb8TpikKyg 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.zb8TpikKyg 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.zb8TpikKyg 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zb8TpikKyg 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2741719 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2741719 /var/tmp/bdevperf.sock 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2741719 ']' 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:59.784 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.045 [2024-12-05 14:09:06.113420] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:19:00.045 [2024-12-05 14:09:06.113483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2741719 ] 00:19:00.045 [2024-12-05 14:09:06.198967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.045 [2024-12-05 14:09:06.227923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.617 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.878 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:00.878 14:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zb8TpikKyg 00:19:00.878 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:01.139 [2024-12-05 14:09:07.243752] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:01.139 [2024-12-05 14:09:07.248313] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:01.139 [2024-12-05 14:09:07.248332] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:01.139 [2024-12-05 14:09:07.248353] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:01.139 [2024-12-05 14:09:07.248998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b8be0 (107): Transport endpoint is not connected 00:19:01.139 [2024-12-05 14:09:07.249993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b8be0 (9): Bad file descriptor 00:19:01.139 [2024-12-05 14:09:07.250995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:01.139 [2024-12-05 14:09:07.251008] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:01.139 [2024-12-05 14:09:07.251014] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:01.139 [2024-12-05 14:09:07.251021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:19:01.139 request: 00:19:01.139 { 00:19:01.139 "name": "TLSTEST", 00:19:01.139 "trtype": "tcp", 00:19:01.139 "traddr": "10.0.0.2", 00:19:01.139 "adrfam": "ipv4", 00:19:01.139 "trsvcid": "4420", 00:19:01.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:01.139 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:01.139 "prchk_reftag": false, 00:19:01.139 "prchk_guard": false, 00:19:01.139 "hdgst": false, 00:19:01.139 "ddgst": false, 00:19:01.139 "psk": "key0", 00:19:01.139 "allow_unrecognized_csi": false, 00:19:01.139 "method": "bdev_nvme_attach_controller", 00:19:01.139 "req_id": 1 00:19:01.139 } 00:19:01.139 Got JSON-RPC error response 00:19:01.139 response: 00:19:01.139 { 00:19:01.139 "code": -5, 00:19:01.139 "message": "Input/output error" 00:19:01.139 } 00:19:01.139 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2741719 00:19:01.139 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2741719 ']' 00:19:01.139 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2741719 00:19:01.139 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:01.139 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.139 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2741719 00:19:01.139 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:01.139 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:01.139 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2741719' 00:19:01.139 killing process with pid 2741719 00:19:01.139 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2741719 00:19:01.139 Received shutdown signal, test time was about 10.000000 seconds 00:19:01.139 00:19:01.139 Latency(us) 00:19:01.139 [2024-12-05T13:09:07.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.139 [2024-12-05T13:09:07.439Z] =================================================================================================================== 00:19:01.139 [2024-12-05T13:09:07.439Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:01.139 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2741719 00:19:01.139 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:01.139 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:01.139 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:01.139 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:01.139 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:01.402 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zb8TpikKyg 00:19:01.402 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:01.402 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.zb8TpikKyg 00:19:01.402 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:01.402 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.402 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:01.402 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.402 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.zb8TpikKyg 00:19:01.402 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:01.402 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:01.402 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:01.402 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zb8TpikKyg 00:19:01.402 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:01.402 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2742019 00:19:01.402 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:01.402 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2742019 /var/tmp/bdevperf.sock 00:19:01.402 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:01.402 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2742019 ']' 00:19:01.402 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.402 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.402 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:01.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.402 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.402 14:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.402 [2024-12-05 14:09:07.492107] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:19:01.402 [2024-12-05 14:09:07.492161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2742019 ] 00:19:01.402 [2024-12-05 14:09:07.576680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.402 [2024-12-05 14:09:07.605506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.345 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.345 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:02.345 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zb8TpikKyg 00:19:02.345 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:02.345 [2024-12-05 14:09:08.625273] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:02.346 [2024-12-05 14:09:08.630923] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:02.346 [2024-12-05 14:09:08.630942] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:02.346 [2024-12-05 14:09:08.630962] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:02.346 [2024-12-05 14:09:08.631394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b09be0 (107): Transport endpoint is not connected 00:19:02.346 [2024-12-05 14:09:08.632389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b09be0 (9): Bad file descriptor 00:19:02.346 [2024-12-05 14:09:08.633391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:02.346 [2024-12-05 14:09:08.633399] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:02.346 [2024-12-05 14:09:08.633405] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:02.346 [2024-12-05 14:09:08.633411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:19:02.346 request: 00:19:02.346 { 00:19:02.346 "name": "TLSTEST", 00:19:02.346 "trtype": "tcp", 00:19:02.346 "traddr": "10.0.0.2", 00:19:02.346 "adrfam": "ipv4", 00:19:02.346 "trsvcid": "4420", 00:19:02.346 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:02.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:02.346 "prchk_reftag": false, 00:19:02.346 "prchk_guard": false, 00:19:02.346 "hdgst": false, 00:19:02.346 "ddgst": false, 00:19:02.346 "psk": "key0", 00:19:02.346 "allow_unrecognized_csi": false, 00:19:02.346 "method": "bdev_nvme_attach_controller", 00:19:02.346 "req_id": 1 00:19:02.346 } 00:19:02.346 Got JSON-RPC error response 00:19:02.346 response: 00:19:02.346 { 00:19:02.346 "code": -5, 00:19:02.346 "message": "Input/output error" 00:19:02.346 } 00:19:02.607 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2742019 00:19:02.607 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2742019 ']' 00:19:02.607 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2742019 00:19:02.607 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2742019 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2742019' 00:19:02.608 killing process with pid 2742019 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2742019 00:19:02.608 Received shutdown signal, test time was about 10.000000 seconds 00:19:02.608 00:19:02.608 Latency(us) 00:19:02.608 [2024-12-05T13:09:08.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.608 [2024-12-05T13:09:08.908Z] =================================================================================================================== 00:19:02.608 [2024-12-05T13:09:08.908Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2742019 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:02.608 
14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2742432 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2742432 /var/tmp/bdevperf.sock 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2742432 ']' 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:02.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.608 14:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.608 [2024-12-05 14:09:08.882595] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:19:02.608 [2024-12-05 14:09:08.882651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2742432 ] 00:19:02.869 [2024-12-05 14:09:08.967352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.869 [2024-12-05 14:09:08.995552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:03.441 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:03.441 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:03.441 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:03.701 [2024-12-05 14:09:09.839003] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:03.701 [2024-12-05 14:09:09.839030] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:03.701 request: 00:19:03.701 { 00:19:03.701 "name": "key0", 00:19:03.701 "path": "", 00:19:03.701 "method": "keyring_file_add_key", 00:19:03.701 "req_id": 1 00:19:03.701 } 00:19:03.701 Got JSON-RPC error response 00:19:03.701 response: 00:19:03.701 { 00:19:03.701 "code": -1, 00:19:03.701 "message": "Operation not permitted" 00:19:03.701 } 00:19:03.701 14:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:03.964 [2024-12-05 14:09:10.023559] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:03.964 [2024-12-05 14:09:10.023593] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:03.964 request: 00:19:03.964 { 00:19:03.964 "name": "TLSTEST", 00:19:03.964 "trtype": "tcp", 00:19:03.964 "traddr": "10.0.0.2", 00:19:03.964 "adrfam": "ipv4", 00:19:03.964 "trsvcid": "4420", 00:19:03.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:03.964 "prchk_reftag": false, 00:19:03.964 "prchk_guard": false, 00:19:03.964 "hdgst": false, 00:19:03.964 "ddgst": false, 00:19:03.964 "psk": "key0", 00:19:03.964 "allow_unrecognized_csi": false, 00:19:03.964 "method": "bdev_nvme_attach_controller", 00:19:03.964 "req_id": 1 00:19:03.964 } 00:19:03.964 Got JSON-RPC error response 00:19:03.964 response: 00:19:03.964 { 00:19:03.964 "code": -126, 00:19:03.964 "message": "Required key not available" 00:19:03.964 } 00:19:03.964 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2742432 00:19:03.964 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2742432 ']' 00:19:03.964 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2742432 00:19:03.964 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:03.964 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.964 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2742432 00:19:03.964 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:03.964 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:03.964 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2742432' 00:19:03.964 killing process with pid 2742432 00:19:03.964 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2742432 00:19:03.964 Received shutdown signal, test time was about 10.000000 seconds 00:19:03.964 00:19:03.964 Latency(us) 00:19:03.964 [2024-12-05T13:09:10.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.964 [2024-12-05T13:09:10.264Z] =================================================================================================================== 00:19:03.964 [2024-12-05T13:09:10.264Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:03.964 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2742432 00:19:03.964 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:03.964 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:03.964 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:03.964 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:03.964 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:03.964 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2736167 00:19:03.964 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2736167 ']' 00:19:03.964 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2736167 00:19:03.964 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:03.964 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.964 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2736167 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2736167' 00:19:04.225 killing process with pid 2736167 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2736167 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2736167 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:04.225 14:09:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.H4yWLgwwNm 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.H4yWLgwwNm 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2743081 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2743081 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2743081 ']' 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.225 14:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.225 [2024-12-05 14:09:10.515740] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:19:04.225 [2024-12-05 14:09:10.515830] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.486 [2024-12-05 14:09:10.609551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.486 [2024-12-05 14:09:10.647562] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.486 [2024-12-05 14:09:10.647595] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
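The format_interchange_psk step traced above (nvmf/common.sh, format_key) converts the 48-character key material into the NVMe TLS PSK interchange string before writing it to a 0600-protected temp file. A minimal sketch of that conversion, assuming the interchange layout from the NVMe/TCP transport spec (prefix, two-hex-digit hash indicator, then base64 of the configured PSK bytes followed by their little-endian CRC32); the function name here is illustrative, not a verbatim copy of the helper:

# Sketch of the format_key helper traced above; the CRC handling is an
# assumption based on the NVMe/TCP TLS PSK interchange format.
format_key_sketch() {
  local prefix=$1 key=$2 digest=$3
  python3 - "$prefix" "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
# Assumption: 4-byte little-endian CRC32 of the key is appended before base64.
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"{prefix}:{digest:02x}:{base64.b64encode(key + crc).decode()}:", end="")
PYEOF
}

# digest 2 selects the SHA-384 indicator ("02"); with the key from the trace
# this should reproduce the key_long value above:
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
format_key_sketch NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2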
00:19:04.486 [2024-12-05 14:09:10.647601] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.487 [2024-12-05 14:09:10.647607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.487 [2024-12-05 14:09:10.647611] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:04.487 [2024-12-05 14:09:10.648161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.060 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.060 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:05.060 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:05.060 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:05.060 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.060 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.060 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.H4yWLgwwNm 00:19:05.060 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.H4yWLgwwNm 00:19:05.060 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:05.322 [2024-12-05 14:09:11.502049] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:05.322 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:05.583 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:05.583 [2024-12-05 14:09:11.838877] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:05.583 [2024-12-05 14:09:11.839062] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.583 14:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:05.844 malloc0 00:19:05.844 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:06.105 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.H4yWLgwwNm 00:19:06.105 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:06.365 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H4yWLgwwNm 00:19:06.365 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:06.365 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:06.365 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:06.365 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.H4yWLgwwNm 00:19:06.365 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:06.365 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2743541 00:19:06.365 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:06.365 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2743541 /var/tmp/bdevperf.sock 00:19:06.365 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:06.365 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2743541 ']' 00:19:06.365 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:06.365 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.365 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:06.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:06.365 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.365 14:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.365 [2024-12-05 14:09:12.565963] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:19:06.365 [2024-12-05 14:09:12.566013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2743541 ] 00:19:06.365 [2024-12-05 14:09:12.646983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.627 [2024-12-05 14:09:12.675871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.199 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.199 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:07.199 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H4yWLgwwNm 00:19:07.460 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:07.460 [2024-12-05 14:09:13.667543] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:07.460 TLSTESTn1 00:19:07.720 14:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:07.720 Running I/O for 10 seconds... 00:19:09.599 5563.00 IOPS, 21.73 MiB/s [2024-12-05T13:09:17.287Z] 5683.00 IOPS, 22.20 MiB/s [2024-12-05T13:09:18.229Z] 5785.00 IOPS, 22.60 MiB/s [2024-12-05T13:09:19.172Z] 5431.00 IOPS, 21.21 MiB/s [2024-12-05T13:09:20.110Z] 5405.40 IOPS, 21.11 MiB/s [2024-12-05T13:09:21.052Z] 5524.67 IOPS, 21.58 MiB/s [2024-12-05T13:09:21.995Z] 5529.86 IOPS, 21.60 MiB/s [2024-12-05T13:09:22.936Z] 5575.25 IOPS, 21.78 MiB/s [2024-12-05T13:09:23.877Z] 5533.44 IOPS, 21.62 MiB/s [2024-12-05T13:09:24.136Z] 5612.80 IOPS, 21.93 MiB/s 00:19:17.836 Latency(us) 00:19:17.836 [2024-12-05T13:09:24.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.836 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:17.836 Verification LBA range: start 0x0 length 0x2000 00:19:17.836 TLSTESTn1 : 10.02 5616.25 21.94 0.00 0.00 22756.42 6471.68 31457.28 00:19:17.836 [2024-12-05T13:09:24.136Z] =================================================================================================================== 00:19:17.836 [2024-12-05T13:09:24.136Z] Total : 5616.25 21.94 0.00 0.00 22756.42 6471.68 31457.28 00:19:17.836 { 00:19:17.836 "results": [ 00:19:17.836 { 00:19:17.836 "job": "TLSTESTn1", 00:19:17.836 "core_mask": "0x4", 00:19:17.836 "workload": "verify", 00:19:17.836 "status": "finished", 00:19:17.836 "verify_range": { 00:19:17.836 "start": 0, 00:19:17.836 "length": 8192 00:19:17.836 }, 00:19:17.836 "queue_depth": 128, 00:19:17.836 "io_size": 4096, 00:19:17.836 "runtime": 10.016116, 00:19:17.836 "iops": 5616.2488533479445, 00:19:17.836 "mibps": 21.93847208339041, 00:19:17.836 "io_failed": 0, 00:19:17.836 "io_timeout": 0, 00:19:17.836 "avg_latency_us": 22756.41597307403, 00:19:17.836 "min_latency_us": 6471.68, 00:19:17.836 "max_latency_us": 31457.28 00:19:17.836 } 00:19:17.836 ], 00:19:17.836 "core_count": 1 
00:19:17.836 } 00:19:17.836 14:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:17.836 14:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2743541 00:19:17.836 14:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2743541 ']' 00:19:17.836 14:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2743541 00:19:17.836 14:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:17.836 14:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.836 14:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2743541 00:19:17.836 14:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:17.836 14:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:17.836 14:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2743541' 00:19:17.836 killing process with pid 2743541 00:19:17.836 14:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2743541 00:19:17.836 Received shutdown signal, test time was about 10.000000 seconds 00:19:17.836 00:19:17.837 Latency(us) 00:19:17.837 [2024-12-05T13:09:24.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.837 [2024-12-05T13:09:24.137Z] =================================================================================================================== 00:19:17.837 [2024-12-05T13:09:24.137Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:17.837 14:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2743541 00:19:17.837 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.H4yWLgwwNm 00:19:17.837 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H4yWLgwwNm 00:19:17.837 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:17.837 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H4yWLgwwNm 00:19:17.837 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:17.837 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:17.837 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:17.837 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:17.837 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.H4yWLgwwNm 00:19:17.837 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:17.837 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:17.837 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:17.837 14:09:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.H4yWLgwwNm 00:19:17.837 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:17.837 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:17.837 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2745882 00:19:17.837 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:17.837 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2745882 /var/tmp/bdevperf.sock 00:19:17.837 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2745882 ']' 00:19:17.837 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:17.837 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.837 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:17.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:17.837 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.837 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.837 [2024-12-05 14:09:24.118496] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:19:17.837 [2024-12-05 14:09:24.118540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2745882 ] 00:19:18.096 [2024-12-05 14:09:24.167871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.096 [2024-12-05 14:09:24.196488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.096 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.096 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:18.096 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H4yWLgwwNm 00:19:18.355 [2024-12-05 14:09:24.406382] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.H4yWLgwwNm': 0100666 00:19:18.355 [2024-12-05 14:09:24.406402] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:18.355 request: 00:19:18.355 { 00:19:18.355 "name": "key0", 00:19:18.355 "path": "/tmp/tmp.H4yWLgwwNm", 00:19:18.355 "method": "keyring_file_add_key", 00:19:18.355 "req_id": 1 00:19:18.355 } 00:19:18.355 Got JSON-RPC error response 00:19:18.355 response: 00:19:18.355 { 00:19:18.355 "code": -1, 00:19:18.355 "message": "Operation not permitted" 00:19:18.355 } 00:19:18.355 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:18.355 [2024-12-05 14:09:24.558831] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:18.355 [2024-12-05 14:09:24.558854] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:18.355 request: 00:19:18.355 { 00:19:18.355 "name": "TLSTEST", 00:19:18.355 "trtype": "tcp", 00:19:18.355 "traddr": "10.0.0.2", 00:19:18.355 "adrfam": "ipv4", 00:19:18.355 "trsvcid": "4420", 00:19:18.355 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:18.355 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:18.355 "prchk_reftag": false, 00:19:18.355 "prchk_guard": false, 00:19:18.355 "hdgst": false, 00:19:18.355 "ddgst": false, 00:19:18.355 "psk": "key0", 00:19:18.355 "allow_unrecognized_csi": false, 00:19:18.355 "method": "bdev_nvme_attach_controller", 00:19:18.355 "req_id": 1 00:19:18.355 } 00:19:18.355 Got JSON-RPC error response 00:19:18.355 response: 00:19:18.355 { 00:19:18.355 "code": -126, 00:19:18.355 "message": "Required key not available" 00:19:18.355 } 00:19:18.355 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2745882 00:19:18.355 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2745882 ']' 00:19:18.355 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2745882 00:19:18.355 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:18.355 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.355 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2745882 00:19:18.355 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:18.355 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:18.355 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2745882' 00:19:18.355 killing process with pid 2745882 00:19:18.356 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2745882 00:19:18.356 Received shutdown signal, test time was about 10.000000 seconds 00:19:18.356 00:19:18.356 Latency(us) 00:19:18.356 [2024-12-05T13:09:24.656Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.356 [2024-12-05T13:09:24.656Z] =================================================================================================================== 00:19:18.356 [2024-12-05T13:09:24.656Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:18.356 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2745882 00:19:18.615 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:18.615 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:18.615 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:18.615 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:18.615 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:18.615 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2743081 00:19:18.615 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2743081 ']' 00:19:18.615 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2743081 00:19:18.615 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:18.615 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.615 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2743081 00:19:18.615 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:18.615 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:18.615 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2743081' 00:19:18.615 killing process with pid 2743081 00:19:18.615 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2743081 00:19:18.615 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2743081 00:19:18.615 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:18.615 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:18.615 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:18.615 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.615 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:18.615 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2745908 00:19:18.615 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2745908 00:19:18.615 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2745908 ']' 00:19:18.903 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.903 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.903 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.903 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.903 14:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.903 [2024-12-05 14:09:24.941666] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:19:18.903 [2024-12-05 14:09:24.941706] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.903 [2024-12-05 14:09:24.994933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.903 [2024-12-05 14:09:25.023036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.903 [2024-12-05 14:09:25.023062] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.903 [2024-12-05 14:09:25.023068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.903 [2024-12-05 14:09:25.023073] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.903 [2024-12-05 14:09:25.023077] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
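The chmod 0666 run above and the target-side setup_nvmf_tgt attempt below fail at the same gate: keyring_file refuses any key file whose mode has group/other permission bits set (the "0100666" in the error is the full octal st_mode of a regular file with 0666 permissions). A rough bash equivalent of that check, with the function name assumed for illustration; the real check lives in SPDK's keyring_file module:

# Sketch of the permission gate implied by the keyring errors in this log.
check_key_mode() {
  local key=$1 mode
  mode=$(stat -c '%a' "$key")        # e.g. 666 or 600
  if (( 8#$mode & 8#077 )); then     # any group/other bit set?
    echo "Invalid permissions for key file '$key': 0100$mode" >&2
    return 1
  fi
}

check_key_mode /tmp/tmp.H4yWLgwwNm   # fails while the file is chmod 0666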
00:19:18.903 [2024-12-05 14:09:25.023516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.903 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.903 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:18.903 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:18.903 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:18.903 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.903 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.903 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.H4yWLgwwNm 00:19:18.903 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:18.903 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.H4yWLgwwNm 00:19:18.903 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:18.903 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:18.904 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:18.904 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:18.904 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.H4yWLgwwNm 00:19:18.904 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.H4yWLgwwNm 00:19:18.904 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:19.214 [2024-12-05 14:09:25.294265] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:19.214 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:19.214 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:19.516 [2024-12-05 14:09:25.627083] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:19.516 [2024-12-05 14:09:25.627272] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.516 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:19.516 malloc0 00:19:19.776 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:19.776 14:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.H4yWLgwwNm 00:19:20.036 [2024-12-05 
14:09:26.114138] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.H4yWLgwwNm': 0100666 00:19:20.036 [2024-12-05 14:09:26.114158] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:20.036 request: 00:19:20.036 { 00:19:20.036 "name": "key0", 00:19:20.036 "path": "/tmp/tmp.H4yWLgwwNm", 00:19:20.036 "method": "keyring_file_add_key", 00:19:20.036 "req_id": 1 00:19:20.036 } 00:19:20.036 Got JSON-RPC error response 00:19:20.036 response: 00:19:20.036 { 00:19:20.036 "code": -1, 00:19:20.036 "message": "Operation not permitted" 00:19:20.036 } 00:19:20.036 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:20.036 [2024-12-05 14:09:26.282578] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:20.036 [2024-12-05 14:09:26.282601] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:20.036 request: 00:19:20.036 { 00:19:20.036 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.036 "host": "nqn.2016-06.io.spdk:host1", 00:19:20.036 "psk": "key0", 00:19:20.036 "method": "nvmf_subsystem_add_host", 00:19:20.036 "req_id": 1 00:19:20.036 } 00:19:20.036 Got JSON-RPC error response 00:19:20.036 response: 00:19:20.036 { 00:19:20.036 "code": -32603, 00:19:20.036 "message": "Internal error" 00:19:20.036 } 00:19:20.036 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:20.036 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:20.036 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:20.036 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:20.036 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2745908 00:19:20.036 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2745908 ']' 00:19:20.036 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2745908 00:19:20.036 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:20.036 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:20.036 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2745908 00:19:20.295 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:20.295 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:20.295 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2745908' 00:19:20.295 killing process with pid 2745908 00:19:20.295 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2745908 00:19:20.295 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2745908 00:19:20.295 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.H4yWLgwwNm 00:19:20.295 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:20.295 14:09:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:20.295 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:20.295 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.295 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2746280 00:19:20.295 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2746280 00:19:20.295 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:20.295 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2746280 ']' 00:19:20.295 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.295 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.295 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.295 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.295 14:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.295 [2024-12-05 14:09:26.546574] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:19:20.295 [2024-12-05 14:09:26.546624] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.554 [2024-12-05 14:09:26.627613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.554 [2024-12-05 14:09:26.655833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.554 [2024-12-05 14:09:26.655859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.554 [2024-12-05 14:09:26.655864] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.554 [2024-12-05 14:09:26.655869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.554 [2024-12-05 14:09:26.655874] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
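The two JSON-RPC failures above are the expected negative case: SPDK's file-based keyring rejects a PSK file whose mode allows group or world access (0100666 here), so key0 is never added, and the dependent nvmf_subsystem_add_host call then fails with -32603. The recovery the harness performs (chmod at target/tls.sh@182, then the re-run of @58/@59) reduces to three commands; a minimal sketch with the long workspace prefix shortened to rpc.py, paths and NQNs taken verbatim from the trace:

  # Owner-only permissions: keyring_file_check_path refuses anything looser.
  chmod 0600 /tmp/tmp.H4yWLgwwNm

  # Now the key registers, and the host entry can reference it as a TLS PSK.
  rpc.py keyring_file_add_key key0 /tmp/tmp.H4yWLgwwNm
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0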
00:19:20.554 [2024-12-05 14:09:26.656317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.123 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.123 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:21.123 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:21.123 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:21.123 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.123 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.123 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.H4yWLgwwNm 00:19:21.123 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.H4yWLgwwNm 00:19:21.123 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:21.382 [2024-12-05 14:09:27.508251] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.382 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:21.642 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:21.642 [2024-12-05 14:09:27.825027] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:21.642 [2024-12-05 14:09:27.825216] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.642 14:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:21.901 malloc0 00:19:21.901 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:21.901 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.H4yWLgwwNm 00:19:22.160 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:22.420 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2746646 00:19:22.420 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:22.420 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:22.420 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2746646 /var/tmp/bdevperf.sock 00:19:22.420 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2746646 ']' 00:19:22.420 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:22.420 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.420 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:22.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:22.420 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.420 14:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.420 [2024-12-05 14:09:28.551603] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:19:22.420 [2024-12-05 14:09:28.551656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2746646 ] 00:19:22.420 [2024-12-05 14:09:28.616639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.420 [2024-12-05 14:09:28.646360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.360 14:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.360 14:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:23.360 14:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H4yWLgwwNm 00:19:23.360 14:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:23.620 [2024-12-05 14:09:29.666301] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:23.620 TLSTESTn1 00:19:23.620 14:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:23.883 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:23.883 "subsystems": [ 00:19:23.883 { 00:19:23.883 "subsystem": "keyring", 00:19:23.883 "config": [ 00:19:23.883 { 00:19:23.883 "method": "keyring_file_add_key", 00:19:23.883 "params": { 00:19:23.883 "name": "key0", 00:19:23.883 "path": "/tmp/tmp.H4yWLgwwNm" 00:19:23.883 } 00:19:23.883 } 00:19:23.883 ] 00:19:23.883 }, 00:19:23.883 { 00:19:23.883 "subsystem": "iobuf", 00:19:23.883 "config": [ 00:19:23.883 { 00:19:23.883 "method": "iobuf_set_options", 00:19:23.883 "params": { 00:19:23.883 "small_pool_count": 8192, 00:19:23.883 "large_pool_count": 1024, 00:19:23.883 "small_bufsize": 8192, 00:19:23.883 "large_bufsize": 135168, 00:19:23.883 "enable_numa": false 00:19:23.883 } 00:19:23.883 } 00:19:23.883 ] 00:19:23.883 }, 00:19:23.883 { 00:19:23.883 "subsystem": "sock", 00:19:23.883 "config": [ 00:19:23.883 { 00:19:23.883 "method": "sock_set_default_impl", 00:19:23.883 "params": { 00:19:23.883 "impl_name": "posix" 
00:19:23.883 } 00:19:23.883 }, 00:19:23.883 { 00:19:23.883 "method": "sock_impl_set_options", 00:19:23.883 "params": { 00:19:23.883 "impl_name": "ssl", 00:19:23.883 "recv_buf_size": 4096, 00:19:23.883 "send_buf_size": 4096, 00:19:23.883 "enable_recv_pipe": true, 00:19:23.883 "enable_quickack": false, 00:19:23.883 "enable_placement_id": 0, 00:19:23.883 "enable_zerocopy_send_server": true, 00:19:23.883 "enable_zerocopy_send_client": false, 00:19:23.883 "zerocopy_threshold": 0, 00:19:23.883 "tls_version": 0, 00:19:23.883 "enable_ktls": false 00:19:23.883 } 00:19:23.883 }, 00:19:23.883 { 00:19:23.883 "method": "sock_impl_set_options", 00:19:23.883 "params": { 00:19:23.883 "impl_name": "posix", 00:19:23.883 "recv_buf_size": 2097152, 00:19:23.883 "send_buf_size": 2097152, 00:19:23.883 "enable_recv_pipe": true, 00:19:23.883 "enable_quickack": false, 00:19:23.883 "enable_placement_id": 0, 00:19:23.883 "enable_zerocopy_send_server": true, 00:19:23.883 "enable_zerocopy_send_client": false, 00:19:23.883 "zerocopy_threshold": 0, 00:19:23.883 "tls_version": 0, 00:19:23.883 "enable_ktls": false 00:19:23.883 } 00:19:23.883 } 00:19:23.883 ] 00:19:23.883 }, 00:19:23.883 { 00:19:23.883 "subsystem": "vmd", 00:19:23.883 "config": [] 00:19:23.883 }, 00:19:23.883 { 00:19:23.883 "subsystem": "accel", 00:19:23.883 "config": [ 00:19:23.883 { 00:19:23.883 "method": "accel_set_options", 00:19:23.883 "params": { 00:19:23.883 "small_cache_size": 128, 00:19:23.883 "large_cache_size": 16, 00:19:23.883 "task_count": 2048, 00:19:23.883 "sequence_count": 2048, 00:19:23.883 "buf_count": 2048 00:19:23.883 } 00:19:23.883 } 00:19:23.883 ] 00:19:23.883 }, 00:19:23.883 { 00:19:23.883 "subsystem": "bdev", 00:19:23.883 "config": [ 00:19:23.883 { 00:19:23.883 "method": "bdev_set_options", 00:19:23.883 "params": { 00:19:23.883 "bdev_io_pool_size": 65535, 00:19:23.883 "bdev_io_cache_size": 256, 00:19:23.883 "bdev_auto_examine": true, 00:19:23.883 "iobuf_small_cache_size": 128, 00:19:23.883 "iobuf_large_cache_size": 16 00:19:23.883 } 00:19:23.883 }, 00:19:23.883 { 00:19:23.883 "method": "bdev_raid_set_options", 00:19:23.883 "params": { 00:19:23.883 "process_window_size_kb": 1024, 00:19:23.883 "process_max_bandwidth_mb_sec": 0 00:19:23.883 } 00:19:23.883 }, 00:19:23.883 { 00:19:23.883 "method": "bdev_iscsi_set_options", 00:19:23.883 "params": { 00:19:23.883 "timeout_sec": 30 00:19:23.883 } 00:19:23.883 }, 00:19:23.883 { 00:19:23.883 "method": "bdev_nvme_set_options", 00:19:23.883 "params": { 00:19:23.883 "action_on_timeout": "none", 00:19:23.883 "timeout_us": 0, 00:19:23.883 "timeout_admin_us": 0, 00:19:23.883 "keep_alive_timeout_ms": 10000, 00:19:23.883 "arbitration_burst": 0, 00:19:23.883 "low_priority_weight": 0, 00:19:23.883 "medium_priority_weight": 0, 00:19:23.883 "high_priority_weight": 0, 00:19:23.883 "nvme_adminq_poll_period_us": 10000, 00:19:23.883 "nvme_ioq_poll_period_us": 0, 00:19:23.883 "io_queue_requests": 0, 00:19:23.883 "delay_cmd_submit": true, 00:19:23.883 "transport_retry_count": 4, 00:19:23.883 "bdev_retry_count": 3, 00:19:23.883 "transport_ack_timeout": 0, 00:19:23.883 "ctrlr_loss_timeout_sec": 0, 00:19:23.883 "reconnect_delay_sec": 0, 00:19:23.883 "fast_io_fail_timeout_sec": 0, 00:19:23.883 "disable_auto_failback": false, 00:19:23.883 "generate_uuids": false, 00:19:23.883 "transport_tos": 0, 00:19:23.883 "nvme_error_stat": false, 00:19:23.883 "rdma_srq_size": 0, 00:19:23.883 "io_path_stat": false, 00:19:23.883 "allow_accel_sequence": false, 00:19:23.883 "rdma_max_cq_size": 0, 00:19:23.883 
"rdma_cm_event_timeout_ms": 0, 00:19:23.883 "dhchap_digests": [ 00:19:23.883 "sha256", 00:19:23.884 "sha384", 00:19:23.884 "sha512" 00:19:23.884 ], 00:19:23.884 "dhchap_dhgroups": [ 00:19:23.884 "null", 00:19:23.884 "ffdhe2048", 00:19:23.884 "ffdhe3072", 00:19:23.884 "ffdhe4096", 00:19:23.884 "ffdhe6144", 00:19:23.884 "ffdhe8192" 00:19:23.884 ] 00:19:23.884 } 00:19:23.884 }, 00:19:23.884 { 00:19:23.884 "method": "bdev_nvme_set_hotplug", 00:19:23.884 "params": { 00:19:23.884 "period_us": 100000, 00:19:23.884 "enable": false 00:19:23.884 } 00:19:23.884 }, 00:19:23.884 { 00:19:23.884 "method": "bdev_malloc_create", 00:19:23.884 "params": { 00:19:23.884 "name": "malloc0", 00:19:23.884 "num_blocks": 8192, 00:19:23.884 "block_size": 4096, 00:19:23.884 "physical_block_size": 4096, 00:19:23.884 "uuid": "e23b42c4-d0bb-4219-980d-2feb8ba727dc", 00:19:23.884 "optimal_io_boundary": 0, 00:19:23.884 "md_size": 0, 00:19:23.884 "dif_type": 0, 00:19:23.884 "dif_is_head_of_md": false, 00:19:23.884 "dif_pi_format": 0 00:19:23.884 } 00:19:23.884 }, 00:19:23.884 { 00:19:23.884 "method": "bdev_wait_for_examine" 00:19:23.884 } 00:19:23.884 ] 00:19:23.884 }, 00:19:23.884 { 00:19:23.884 "subsystem": "nbd", 00:19:23.884 "config": [] 00:19:23.884 }, 00:19:23.884 { 00:19:23.884 "subsystem": "scheduler", 00:19:23.884 "config": [ 00:19:23.884 { 00:19:23.884 "method": "framework_set_scheduler", 00:19:23.884 "params": { 00:19:23.884 "name": "static" 00:19:23.884 } 00:19:23.884 } 00:19:23.884 ] 00:19:23.884 }, 00:19:23.884 { 00:19:23.884 "subsystem": "nvmf", 00:19:23.884 "config": [ 00:19:23.884 { 00:19:23.884 "method": "nvmf_set_config", 00:19:23.884 "params": { 00:19:23.884 "discovery_filter": "match_any", 00:19:23.884 "admin_cmd_passthru": { 00:19:23.884 "identify_ctrlr": false 00:19:23.884 }, 00:19:23.884 "dhchap_digests": [ 00:19:23.884 "sha256", 00:19:23.884 "sha384", 00:19:23.884 "sha512" 00:19:23.884 ], 00:19:23.884 "dhchap_dhgroups": [ 00:19:23.884 "null", 00:19:23.884 "ffdhe2048", 00:19:23.884 "ffdhe3072", 00:19:23.884 "ffdhe4096", 00:19:23.884 "ffdhe6144", 00:19:23.884 "ffdhe8192" 00:19:23.884 ] 00:19:23.884 } 00:19:23.884 }, 00:19:23.884 { 00:19:23.884 "method": "nvmf_set_max_subsystems", 00:19:23.884 "params": { 00:19:23.884 "max_subsystems": 1024 00:19:23.884 } 00:19:23.884 }, 00:19:23.884 { 00:19:23.884 "method": "nvmf_set_crdt", 00:19:23.884 "params": { 00:19:23.884 "crdt1": 0, 00:19:23.884 "crdt2": 0, 00:19:23.884 "crdt3": 0 00:19:23.884 } 00:19:23.884 }, 00:19:23.884 { 00:19:23.884 "method": "nvmf_create_transport", 00:19:23.884 "params": { 00:19:23.884 "trtype": "TCP", 00:19:23.884 "max_queue_depth": 128, 00:19:23.884 "max_io_qpairs_per_ctrlr": 127, 00:19:23.884 "in_capsule_data_size": 4096, 00:19:23.884 "max_io_size": 131072, 00:19:23.884 "io_unit_size": 131072, 00:19:23.884 "max_aq_depth": 128, 00:19:23.884 "num_shared_buffers": 511, 00:19:23.884 "buf_cache_size": 4294967295, 00:19:23.884 "dif_insert_or_strip": false, 00:19:23.884 "zcopy": false, 00:19:23.884 "c2h_success": false, 00:19:23.884 "sock_priority": 0, 00:19:23.884 "abort_timeout_sec": 1, 00:19:23.884 "ack_timeout": 0, 00:19:23.884 "data_wr_pool_size": 0 00:19:23.884 } 00:19:23.884 }, 00:19:23.884 { 00:19:23.884 "method": "nvmf_create_subsystem", 00:19:23.884 "params": { 00:19:23.884 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.884 "allow_any_host": false, 00:19:23.884 "serial_number": "SPDK00000000000001", 00:19:23.884 "model_number": "SPDK bdev Controller", 00:19:23.884 "max_namespaces": 10, 00:19:23.884 "min_cntlid": 1, 00:19:23.884 
"max_cntlid": 65519, 00:19:23.884 "ana_reporting": false 00:19:23.884 } 00:19:23.884 }, 00:19:23.884 { 00:19:23.884 "method": "nvmf_subsystem_add_host", 00:19:23.884 "params": { 00:19:23.884 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.884 "host": "nqn.2016-06.io.spdk:host1", 00:19:23.884 "psk": "key0" 00:19:23.884 } 00:19:23.884 }, 00:19:23.884 { 00:19:23.884 "method": "nvmf_subsystem_add_ns", 00:19:23.884 "params": { 00:19:23.884 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.884 "namespace": { 00:19:23.884 "nsid": 1, 00:19:23.884 "bdev_name": "malloc0", 00:19:23.884 "nguid": "E23B42C4D0BB4219980D2FEB8BA727DC", 00:19:23.884 "uuid": "e23b42c4-d0bb-4219-980d-2feb8ba727dc", 00:19:23.884 "no_auto_visible": false 00:19:23.884 } 00:19:23.884 } 00:19:23.884 }, 00:19:23.884 { 00:19:23.884 "method": "nvmf_subsystem_add_listener", 00:19:23.884 "params": { 00:19:23.884 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.884 "listen_address": { 00:19:23.884 "trtype": "TCP", 00:19:23.884 "adrfam": "IPv4", 00:19:23.884 "traddr": "10.0.0.2", 00:19:23.884 "trsvcid": "4420" 00:19:23.884 }, 00:19:23.884 "secure_channel": true 00:19:23.884 } 00:19:23.884 } 00:19:23.884 ] 00:19:23.884 } 00:19:23.884 ] 00:19:23.884 }' 00:19:23.884 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:24.146 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:24.146 "subsystems": [ 00:19:24.146 { 00:19:24.146 "subsystem": "keyring", 00:19:24.146 "config": [ 00:19:24.146 { 00:19:24.146 "method": "keyring_file_add_key", 00:19:24.146 "params": { 00:19:24.146 "name": "key0", 00:19:24.146 "path": "/tmp/tmp.H4yWLgwwNm" 00:19:24.146 } 00:19:24.146 } 00:19:24.146 ] 00:19:24.146 }, 00:19:24.146 { 00:19:24.146 "subsystem": "iobuf", 00:19:24.146 "config": [ 00:19:24.146 { 00:19:24.146 "method": "iobuf_set_options", 00:19:24.146 "params": { 00:19:24.146 "small_pool_count": 8192, 00:19:24.146 "large_pool_count": 1024, 00:19:24.146 "small_bufsize": 8192, 00:19:24.146 "large_bufsize": 135168, 00:19:24.146 "enable_numa": false 00:19:24.146 } 00:19:24.146 } 00:19:24.147 ] 00:19:24.147 }, 00:19:24.147 { 00:19:24.147 "subsystem": "sock", 00:19:24.147 "config": [ 00:19:24.147 { 00:19:24.147 "method": "sock_set_default_impl", 00:19:24.147 "params": { 00:19:24.147 "impl_name": "posix" 00:19:24.147 } 00:19:24.147 }, 00:19:24.147 { 00:19:24.147 "method": "sock_impl_set_options", 00:19:24.147 "params": { 00:19:24.147 "impl_name": "ssl", 00:19:24.147 "recv_buf_size": 4096, 00:19:24.147 "send_buf_size": 4096, 00:19:24.147 "enable_recv_pipe": true, 00:19:24.147 "enable_quickack": false, 00:19:24.147 "enable_placement_id": 0, 00:19:24.147 "enable_zerocopy_send_server": true, 00:19:24.147 "enable_zerocopy_send_client": false, 00:19:24.147 "zerocopy_threshold": 0, 00:19:24.147 "tls_version": 0, 00:19:24.147 "enable_ktls": false 00:19:24.147 } 00:19:24.147 }, 00:19:24.147 { 00:19:24.147 "method": "sock_impl_set_options", 00:19:24.147 "params": { 00:19:24.147 "impl_name": "posix", 00:19:24.147 "recv_buf_size": 2097152, 00:19:24.147 "send_buf_size": 2097152, 00:19:24.147 "enable_recv_pipe": true, 00:19:24.147 "enable_quickack": false, 00:19:24.147 "enable_placement_id": 0, 00:19:24.147 "enable_zerocopy_send_server": true, 00:19:24.147 "enable_zerocopy_send_client": false, 00:19:24.147 "zerocopy_threshold": 0, 00:19:24.147 "tls_version": 0, 00:19:24.147 "enable_ktls": false 00:19:24.147 } 00:19:24.147 
} 00:19:24.147 ] 00:19:24.147 }, 00:19:24.147 { 00:19:24.147 "subsystem": "vmd", 00:19:24.147 "config": [] 00:19:24.147 }, 00:19:24.147 { 00:19:24.147 "subsystem": "accel", 00:19:24.147 "config": [ 00:19:24.147 { 00:19:24.147 "method": "accel_set_options", 00:19:24.147 "params": { 00:19:24.147 "small_cache_size": 128, 00:19:24.147 "large_cache_size": 16, 00:19:24.147 "task_count": 2048, 00:19:24.147 "sequence_count": 2048, 00:19:24.147 "buf_count": 2048 00:19:24.147 } 00:19:24.147 } 00:19:24.147 ] 00:19:24.147 }, 00:19:24.147 { 00:19:24.147 "subsystem": "bdev", 00:19:24.147 "config": [ 00:19:24.147 { 00:19:24.147 "method": "bdev_set_options", 00:19:24.147 "params": { 00:19:24.147 "bdev_io_pool_size": 65535, 00:19:24.147 "bdev_io_cache_size": 256, 00:19:24.147 "bdev_auto_examine": true, 00:19:24.147 "iobuf_small_cache_size": 128, 00:19:24.147 "iobuf_large_cache_size": 16 00:19:24.147 } 00:19:24.147 }, 00:19:24.147 { 00:19:24.147 "method": "bdev_raid_set_options", 00:19:24.147 "params": { 00:19:24.147 "process_window_size_kb": 1024, 00:19:24.147 "process_max_bandwidth_mb_sec": 0 00:19:24.147 } 00:19:24.147 }, 00:19:24.147 { 00:19:24.147 "method": "bdev_iscsi_set_options", 00:19:24.147 "params": { 00:19:24.147 "timeout_sec": 30 00:19:24.147 } 00:19:24.147 }, 00:19:24.147 { 00:19:24.147 "method": "bdev_nvme_set_options", 00:19:24.147 "params": { 00:19:24.147 "action_on_timeout": "none", 00:19:24.147 "timeout_us": 0, 00:19:24.147 "timeout_admin_us": 0, 00:19:24.147 "keep_alive_timeout_ms": 10000, 00:19:24.147 "arbitration_burst": 0, 00:19:24.147 "low_priority_weight": 0, 00:19:24.147 "medium_priority_weight": 0, 00:19:24.147 "high_priority_weight": 0, 00:19:24.147 "nvme_adminq_poll_period_us": 10000, 00:19:24.147 "nvme_ioq_poll_period_us": 0, 00:19:24.147 "io_queue_requests": 512, 00:19:24.147 "delay_cmd_submit": true, 00:19:24.147 "transport_retry_count": 4, 00:19:24.147 "bdev_retry_count": 3, 00:19:24.147 "transport_ack_timeout": 0, 00:19:24.147 "ctrlr_loss_timeout_sec": 0, 00:19:24.147 "reconnect_delay_sec": 0, 00:19:24.147 "fast_io_fail_timeout_sec": 0, 00:19:24.147 "disable_auto_failback": false, 00:19:24.147 "generate_uuids": false, 00:19:24.147 "transport_tos": 0, 00:19:24.147 "nvme_error_stat": false, 00:19:24.147 "rdma_srq_size": 0, 00:19:24.147 "io_path_stat": false, 00:19:24.147 "allow_accel_sequence": false, 00:19:24.147 "rdma_max_cq_size": 0, 00:19:24.147 "rdma_cm_event_timeout_ms": 0, 00:19:24.147 "dhchap_digests": [ 00:19:24.147 "sha256", 00:19:24.147 "sha384", 00:19:24.147 "sha512" 00:19:24.147 ], 00:19:24.147 "dhchap_dhgroups": [ 00:19:24.147 "null", 00:19:24.147 "ffdhe2048", 00:19:24.147 "ffdhe3072", 00:19:24.147 "ffdhe4096", 00:19:24.147 "ffdhe6144", 00:19:24.147 "ffdhe8192" 00:19:24.147 ] 00:19:24.147 } 00:19:24.147 }, 00:19:24.147 { 00:19:24.147 "method": "bdev_nvme_attach_controller", 00:19:24.147 "params": { 00:19:24.147 "name": "TLSTEST", 00:19:24.147 "trtype": "TCP", 00:19:24.147 "adrfam": "IPv4", 00:19:24.147 "traddr": "10.0.0.2", 00:19:24.147 "trsvcid": "4420", 00:19:24.147 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.147 "prchk_reftag": false, 00:19:24.147 "prchk_guard": false, 00:19:24.147 "ctrlr_loss_timeout_sec": 0, 00:19:24.147 "reconnect_delay_sec": 0, 00:19:24.147 "fast_io_fail_timeout_sec": 0, 00:19:24.147 "psk": "key0", 00:19:24.147 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:24.147 "hdgst": false, 00:19:24.147 "ddgst": false, 00:19:24.147 "multipath": "multipath" 00:19:24.147 } 00:19:24.147 }, 00:19:24.147 { 00:19:24.147 "method": 
"bdev_nvme_set_hotplug", 00:19:24.147 "params": { 00:19:24.147 "period_us": 100000, 00:19:24.147 "enable": false 00:19:24.147 } 00:19:24.147 }, 00:19:24.147 { 00:19:24.147 "method": "bdev_wait_for_examine" 00:19:24.147 } 00:19:24.147 ] 00:19:24.147 }, 00:19:24.147 { 00:19:24.147 "subsystem": "nbd", 00:19:24.147 "config": [] 00:19:24.147 } 00:19:24.147 ] 00:19:24.147 }' 00:19:24.147 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2746646 00:19:24.147 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2746646 ']' 00:19:24.147 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2746646 00:19:24.147 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:24.147 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.147 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2746646 00:19:24.147 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:24.147 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:24.147 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2746646' 00:19:24.147 killing process with pid 2746646 00:19:24.147 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2746646 00:19:24.147 Received shutdown signal, test time was about 10.000000 seconds 00:19:24.147 00:19:24.147 Latency(us) 00:19:24.147 [2024-12-05T13:09:30.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.147 [2024-12-05T13:09:30.447Z] =================================================================================================================== 00:19:24.147 [2024-12-05T13:09:30.447Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:24.147 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2746646 00:19:24.147 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2746280 00:19:24.147 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2746280 ']' 00:19:24.147 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2746280 00:19:24.147 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:24.410 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.410 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2746280 00:19:24.410 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:24.410 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:24.410 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2746280' 00:19:24.410 killing process with pid 2746280 00:19:24.410 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2746280 00:19:24.410 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2746280 00:19:24.410 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:24.410 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:24.410 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:24.410 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:24.410 "subsystems": [ 00:19:24.410 { 00:19:24.410 "subsystem": "keyring", 00:19:24.410 "config": [ 00:19:24.410 { 00:19:24.410 "method": "keyring_file_add_key", 00:19:24.410 "params": { 00:19:24.410 "name": "key0", 00:19:24.410 "path": "/tmp/tmp.H4yWLgwwNm" 00:19:24.410 } 00:19:24.410 } 00:19:24.410 ] 00:19:24.410 }, 00:19:24.410 { 00:19:24.410 "subsystem": "iobuf", 00:19:24.410 "config": [ 00:19:24.410 { 00:19:24.410 "method": "iobuf_set_options", 00:19:24.410 "params": { 00:19:24.410 "small_pool_count": 8192, 00:19:24.410 "large_pool_count": 1024, 00:19:24.410 "small_bufsize": 8192, 00:19:24.410 "large_bufsize": 135168, 00:19:24.410 "enable_numa": false 00:19:24.410 } 00:19:24.410 } 00:19:24.410 ] 00:19:24.410 }, 00:19:24.410 { 00:19:24.410 "subsystem": "sock", 00:19:24.410 "config": [ 00:19:24.410 { 00:19:24.410 "method": "sock_set_default_impl", 00:19:24.410 "params": { 00:19:24.410 "impl_name": "posix" 00:19:24.410 } 00:19:24.410 }, 00:19:24.410 { 00:19:24.410 "method": "sock_impl_set_options", 00:19:24.410 "params": { 00:19:24.410 "impl_name": "ssl", 00:19:24.410 "recv_buf_size": 4096, 00:19:24.410 "send_buf_size": 4096, 00:19:24.410 "enable_recv_pipe": true, 00:19:24.410 "enable_quickack": false, 00:19:24.410 "enable_placement_id": 0, 00:19:24.410 "enable_zerocopy_send_server": true, 00:19:24.410 "enable_zerocopy_send_client": false, 00:19:24.410 "zerocopy_threshold": 0, 00:19:24.410 "tls_version": 0, 00:19:24.410 "enable_ktls": false 00:19:24.410 } 00:19:24.410 }, 00:19:24.410 { 00:19:24.410 "method": "sock_impl_set_options", 00:19:24.410 "params": { 00:19:24.410 "impl_name": "posix", 00:19:24.410 "recv_buf_size": 2097152, 00:19:24.410 "send_buf_size": 2097152, 00:19:24.410 "enable_recv_pipe": true, 00:19:24.410 "enable_quickack": false, 00:19:24.410 "enable_placement_id": 0, 00:19:24.410 "enable_zerocopy_send_server": true, 00:19:24.411 "enable_zerocopy_send_client": false, 00:19:24.411 "zerocopy_threshold": 0, 00:19:24.411 "tls_version": 0, 00:19:24.411 "enable_ktls": false 00:19:24.411 } 00:19:24.411 } 00:19:24.411 ] 00:19:24.411 }, 00:19:24.411 { 00:19:24.411 "subsystem": "vmd", 00:19:24.411 "config": [] 00:19:24.411 }, 00:19:24.411 { 00:19:24.411 "subsystem": "accel", 00:19:24.411 "config": [ 00:19:24.411 { 00:19:24.411 "method": "accel_set_options", 00:19:24.411 "params": { 00:19:24.411 "small_cache_size": 128, 00:19:24.411 "large_cache_size": 16, 00:19:24.411 "task_count": 2048, 00:19:24.411 "sequence_count": 2048, 00:19:24.411 "buf_count": 2048 00:19:24.411 } 00:19:24.411 } 00:19:24.411 ] 00:19:24.411 }, 00:19:24.411 { 00:19:24.411 "subsystem": "bdev", 00:19:24.411 "config": [ 00:19:24.411 { 00:19:24.411 "method": "bdev_set_options", 00:19:24.411 "params": { 00:19:24.411 "bdev_io_pool_size": 65535, 00:19:24.411 "bdev_io_cache_size": 256, 00:19:24.411 "bdev_auto_examine": true, 00:19:24.411 "iobuf_small_cache_size": 128, 00:19:24.411 "iobuf_large_cache_size": 16 00:19:24.411 } 00:19:24.411 }, 00:19:24.411 { 00:19:24.411 "method": "bdev_raid_set_options", 00:19:24.411 "params": { 00:19:24.411 "process_window_size_kb": 1024, 00:19:24.411 "process_max_bandwidth_mb_sec": 0 00:19:24.411 } 00:19:24.411 }, 
00:19:24.411 { 00:19:24.411 "method": "bdev_iscsi_set_options", 00:19:24.411 "params": { 00:19:24.411 "timeout_sec": 30 00:19:24.411 } 00:19:24.411 }, 00:19:24.411 { 00:19:24.411 "method": "bdev_nvme_set_options", 00:19:24.411 "params": { 00:19:24.411 "action_on_timeout": "none", 00:19:24.411 "timeout_us": 0, 00:19:24.411 "timeout_admin_us": 0, 00:19:24.411 "keep_alive_timeout_ms": 10000, 00:19:24.411 "arbitration_burst": 0, 00:19:24.411 "low_priority_weight": 0, 00:19:24.411 "medium_priority_weight": 0, 00:19:24.411 "high_priority_weight": 0, 00:19:24.411 "nvme_adminq_poll_period_us": 10000, 00:19:24.411 "nvme_ioq_poll_period_us": 0, 00:19:24.411 "io_queue_requests": 0, 00:19:24.411 "delay_cmd_submit": true, 00:19:24.411 "transport_retry_count": 4, 00:19:24.411 "bdev_retry_count": 3, 00:19:24.411 "transport_ack_timeout": 0, 00:19:24.411 "ctrlr_loss_timeout_sec": 0, 00:19:24.411 "reconnect_delay_sec": 0, 00:19:24.411 "fast_io_fail_timeout_sec": 0, 00:19:24.411 "disable_auto_failback": false, 00:19:24.411 "generate_uuids": false, 00:19:24.411 "transport_tos": 0, 00:19:24.411 "nvme_error_stat": false, 00:19:24.411 "rdma_srq_size": 0, 00:19:24.411 "io_path_stat": false, 00:19:24.411 "allow_accel_sequence": false, 00:19:24.411 "rdma_max_cq_size": 0, 00:19:24.411 "rdma_cm_event_timeout_ms": 0, 00:19:24.411 "dhchap_digests": [ 00:19:24.411 "sha256", 00:19:24.411 "sha384", 00:19:24.411 "sha512" 00:19:24.411 ], 00:19:24.411 "dhchap_dhgroups": [ 00:19:24.411 "null", 00:19:24.411 "ffdhe2048", 00:19:24.411 "ffdhe3072", 00:19:24.411 "ffdhe4096", 00:19:24.411 "ffdhe6144", 00:19:24.411 "ffdhe8192" 00:19:24.411 ] 00:19:24.411 } 00:19:24.411 }, 00:19:24.411 { 00:19:24.411 "method": "bdev_nvme_set_hotplug", 00:19:24.411 "params": { 00:19:24.411 "period_us": 100000, 00:19:24.411 "enable": false 00:19:24.411 } 00:19:24.411 }, 00:19:24.411 { 00:19:24.411 "method": "bdev_malloc_create", 00:19:24.411 "params": { 00:19:24.411 "name": "malloc0", 00:19:24.411 "num_blocks": 8192, 00:19:24.411 "block_size": 4096, 00:19:24.411 "physical_block_size": 4096, 00:19:24.411 "uuid": "e23b42c4-d0bb-4219-980d-2feb8ba727dc", 00:19:24.411 "optimal_io_boundary": 0, 00:19:24.411 "md_size": 0, 00:19:24.411 "dif_type": 0, 00:19:24.411 "dif_is_head_of_md": false, 00:19:24.411 "dif_pi_format": 0 00:19:24.411 } 00:19:24.411 }, 00:19:24.411 { 00:19:24.411 "method": "bdev_wait_for_examine" 00:19:24.411 } 00:19:24.411 ] 00:19:24.411 }, 00:19:24.411 { 00:19:24.411 "subsystem": "nbd", 00:19:24.411 "config": [] 00:19:24.411 }, 00:19:24.411 { 00:19:24.411 "subsystem": "scheduler", 00:19:24.411 "config": [ 00:19:24.411 { 00:19:24.411 "method": "framework_set_scheduler", 00:19:24.411 "params": { 00:19:24.411 "name": "static" 00:19:24.411 } 00:19:24.411 } 00:19:24.411 ] 00:19:24.411 }, 00:19:24.411 { 00:19:24.411 "subsystem": "nvmf", 00:19:24.411 "config": [ 00:19:24.411 { 00:19:24.411 "method": "nvmf_set_config", 00:19:24.411 "params": { 00:19:24.411 "discovery_filter": "match_any", 00:19:24.411 "admin_cmd_passthru": { 00:19:24.411 "identify_ctrlr": false 00:19:24.411 }, 00:19:24.411 "dhchap_digests": [ 00:19:24.411 "sha256", 00:19:24.411 "sha384", 00:19:24.411 "sha512" 00:19:24.411 ], 00:19:24.411 "dhchap_dhgroups": [ 00:19:24.411 "null", 00:19:24.411 "ffdhe2048", 00:19:24.411 "ffdhe3072", 00:19:24.411 "ffdhe4096", 00:19:24.411 "ffdhe6144", 00:19:24.411 "ffdhe8192" 00:19:24.411 ] 00:19:24.411 } 00:19:24.411 }, 00:19:24.411 { 00:19:24.411 "method": "nvmf_set_max_subsystems", 00:19:24.411 "params": { 00:19:24.411 "max_subsystems": 1024 
00:19:24.411 } 00:19:24.411 }, 00:19:24.411 { 00:19:24.411 "method": "nvmf_set_crdt", 00:19:24.411 "params": { 00:19:24.411 "crdt1": 0, 00:19:24.411 "crdt2": 0, 00:19:24.411 "crdt3": 0 00:19:24.411 } 00:19:24.411 }, 00:19:24.411 { 00:19:24.411 "method": "nvmf_create_transport", 00:19:24.411 "params": { 00:19:24.411 "trtype": "TCP", 00:19:24.411 "max_queue_depth": 128, 00:19:24.411 "max_io_qpairs_per_ctrlr": 127, 00:19:24.411 "in_capsule_data_size": 4096, 00:19:24.411 "max_io_size": 131072, 00:19:24.411 "io_unit_size": 131072, 00:19:24.411 "max_aq_depth": 128, 00:19:24.411 "num_shared_buffers": 511, 00:19:24.411 "buf_cache_size": 4294967295, 00:19:24.411 "dif_insert_or_strip": false, 00:19:24.411 "zcopy": false, 00:19:24.411 "c2h_success": false, 00:19:24.411 "sock_priority": 0, 00:19:24.411 "abort_timeout_sec": 1, 00:19:24.411 "ack_timeout": 0, 00:19:24.411 "data_wr_pool_size": 0 00:19:24.411 } 00:19:24.411 }, 00:19:24.411 { 00:19:24.411 "method": "nvmf_create_subsystem", 00:19:24.411 "params": { 00:19:24.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.411 "allow_any_host": false, 00:19:24.411 "serial_number": "SPDK00000000000001", 00:19:24.411 "model_number": "SPDK bdev Controller", 00:19:24.411 "max_namespaces": 10, 00:19:24.411 "min_cntlid": 1, 00:19:24.411 "max_cntlid": 65519, 00:19:24.411 "ana_reporting": false 00:19:24.411 } 00:19:24.411 }, 00:19:24.411 { 00:19:24.411 "method": "nvmf_subsystem_add_host", 00:19:24.411 "params": { 00:19:24.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.411 "host": "nqn.2016-06.io.spdk:host1", 00:19:24.411 "psk": "key0" 00:19:24.411 } 00:19:24.411 }, 00:19:24.411 { 00:19:24.411 "method": "nvmf_subsystem_add_ns", 00:19:24.411 "params": { 00:19:24.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.411 "namespace": { 00:19:24.411 "nsid": 1, 00:19:24.411 "bdev_name": "malloc0", 00:19:24.411 "nguid": "E23B42C4D0BB4219980D2FEB8BA727DC", 00:19:24.411 "uuid": "e23b42c4-d0bb-4219-980d-2feb8ba727dc", 00:19:24.411 "no_auto_visible": false 00:19:24.411 } 00:19:24.411 } 00:19:24.411 }, 00:19:24.411 { 00:19:24.411 "method": "nvmf_subsystem_add_listener", 00:19:24.411 "params": { 00:19:24.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.411 "listen_address": { 00:19:24.411 "trtype": "TCP", 00:19:24.411 "adrfam": "IPv4", 00:19:24.411 "traddr": "10.0.0.2", 00:19:24.411 "trsvcid": "4420" 00:19:24.411 }, 00:19:24.411 "secure_channel": true 00:19:24.411 } 00:19:24.411 } 00:19:24.412 ] 00:19:24.412 } 00:19:24.412 ] 00:19:24.412 }' 00:19:24.412 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.412 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2747167 00:19:24.412 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2747167 00:19:24.412 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:24.412 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2747167 ']' 00:19:24.412 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.412 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.412 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:24.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.412 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.412 14:09:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.412 [2024-12-05 14:09:30.670987] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:19:24.412 [2024-12-05 14:09:30.671042] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.672 [2024-12-05 14:09:30.762249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.672 [2024-12-05 14:09:30.791267] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.672 [2024-12-05 14:09:30.791296] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.672 [2024-12-05 14:09:30.791301] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.672 [2024-12-05 14:09:30.791306] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.672 [2024-12-05 14:09:30.791310] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:24.672 [2024-12-05 14:09:30.791782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.933 [2024-12-05 14:09:30.985311] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.933 [2024-12-05 14:09:31.017329] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:24.933 [2024-12-05 14:09:31.017517] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.195 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.195 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:25.195 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:25.195 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:25.195 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.456 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.456 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2747345 00:19:25.456 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2747345 /var/tmp/bdevperf.sock 00:19:25.456 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2747345 ']' 00:19:25.456 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.456 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.456 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
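The restarted target above was launched with -c /dev/fd/62, and the bdevperf instance configured next reads /dev/fd/63: the harness round-trips the running configuration through save_config and bash process substitution rather than temporary files. A minimal sketch of the pattern, assuming rpc.py and nvmf_tgt are on PATH and omitting the ip netns exec wrapper seen in the log:

  # Capture the live configuration as JSON (target/tls.sh@198), then feed it
  # to a fresh process; <(...) surfaces as /dev/fd/NN in the child's argv,
  # matching the -c /dev/fd/62 and -c /dev/fd/63 arguments traced here.
  tgtconf=$(rpc.py save_config)
  nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")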
00:19:25.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:25.456 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:25.456 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.456 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.456 14:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:25.456 "subsystems": [ 00:19:25.456 { 00:19:25.456 "subsystem": "keyring", 00:19:25.456 "config": [ 00:19:25.456 { 00:19:25.456 "method": "keyring_file_add_key", 00:19:25.456 "params": { 00:19:25.456 "name": "key0", 00:19:25.456 "path": "/tmp/tmp.H4yWLgwwNm" 00:19:25.456 } 00:19:25.456 } 00:19:25.456 ] 00:19:25.456 }, 00:19:25.456 { 00:19:25.456 "subsystem": "iobuf", 00:19:25.456 "config": [ 00:19:25.456 { 00:19:25.456 "method": "iobuf_set_options", 00:19:25.456 "params": { 00:19:25.456 "small_pool_count": 8192, 00:19:25.456 "large_pool_count": 1024, 00:19:25.456 "small_bufsize": 8192, 00:19:25.456 "large_bufsize": 135168, 00:19:25.456 "enable_numa": false 00:19:25.456 } 00:19:25.456 } 00:19:25.456 ] 00:19:25.456 }, 00:19:25.456 { 00:19:25.456 "subsystem": "sock", 00:19:25.456 "config": [ 00:19:25.456 { 00:19:25.456 "method": "sock_set_default_impl", 00:19:25.456 "params": { 00:19:25.456 "impl_name": "posix" 00:19:25.456 } 00:19:25.456 }, 00:19:25.456 { 00:19:25.456 "method": "sock_impl_set_options", 00:19:25.456 "params": { 00:19:25.456 "impl_name": "ssl", 00:19:25.456 "recv_buf_size": 4096, 00:19:25.456 "send_buf_size": 4096, 00:19:25.456 "enable_recv_pipe": true, 00:19:25.456 "enable_quickack": false, 00:19:25.456 "enable_placement_id": 0, 00:19:25.456 "enable_zerocopy_send_server": true, 00:19:25.456 "enable_zerocopy_send_client": false, 00:19:25.456 "zerocopy_threshold": 0, 00:19:25.456 "tls_version": 0, 00:19:25.456 "enable_ktls": false 00:19:25.456 } 00:19:25.456 }, 00:19:25.456 { 00:19:25.456 "method": "sock_impl_set_options", 00:19:25.456 "params": { 00:19:25.456 "impl_name": "posix", 00:19:25.456 "recv_buf_size": 2097152, 00:19:25.456 "send_buf_size": 2097152, 00:19:25.456 "enable_recv_pipe": true, 00:19:25.456 "enable_quickack": false, 00:19:25.456 "enable_placement_id": 0, 00:19:25.456 "enable_zerocopy_send_server": true, 00:19:25.456 "enable_zerocopy_send_client": false, 00:19:25.456 "zerocopy_threshold": 0, 00:19:25.456 "tls_version": 0, 00:19:25.456 "enable_ktls": false 00:19:25.456 } 00:19:25.456 } 00:19:25.456 ] 00:19:25.456 }, 00:19:25.456 { 00:19:25.456 "subsystem": "vmd", 00:19:25.456 "config": [] 00:19:25.456 }, 00:19:25.456 { 00:19:25.456 "subsystem": "accel", 00:19:25.456 "config": [ 00:19:25.456 { 00:19:25.456 "method": "accel_set_options", 00:19:25.456 "params": { 00:19:25.456 "small_cache_size": 128, 00:19:25.456 "large_cache_size": 16, 00:19:25.456 "task_count": 2048, 00:19:25.456 "sequence_count": 2048, 00:19:25.456 "buf_count": 2048 00:19:25.456 } 00:19:25.456 } 00:19:25.456 ] 00:19:25.456 }, 00:19:25.456 { 00:19:25.456 "subsystem": "bdev", 00:19:25.456 "config": [ 00:19:25.456 { 00:19:25.456 "method": "bdev_set_options", 00:19:25.456 "params": { 00:19:25.456 "bdev_io_pool_size": 65535, 00:19:25.456 "bdev_io_cache_size": 256, 00:19:25.456 "bdev_auto_examine": true, 00:19:25.456 "iobuf_small_cache_size": 128, 
00:19:25.456 "iobuf_large_cache_size": 16 00:19:25.456 } 00:19:25.456 }, 00:19:25.456 { 00:19:25.456 "method": "bdev_raid_set_options", 00:19:25.456 "params": { 00:19:25.456 "process_window_size_kb": 1024, 00:19:25.456 "process_max_bandwidth_mb_sec": 0 00:19:25.456 } 00:19:25.456 }, 00:19:25.456 { 00:19:25.456 "method": "bdev_iscsi_set_options", 00:19:25.456 "params": { 00:19:25.456 "timeout_sec": 30 00:19:25.456 } 00:19:25.456 }, 00:19:25.456 { 00:19:25.456 "method": "bdev_nvme_set_options", 00:19:25.456 "params": { 00:19:25.456 "action_on_timeout": "none", 00:19:25.456 "timeout_us": 0, 00:19:25.456 "timeout_admin_us": 0, 00:19:25.456 "keep_alive_timeout_ms": 10000, 00:19:25.456 "arbitration_burst": 0, 00:19:25.456 "low_priority_weight": 0, 00:19:25.456 "medium_priority_weight": 0, 00:19:25.456 "high_priority_weight": 0, 00:19:25.457 "nvme_adminq_poll_period_us": 10000, 00:19:25.457 "nvme_ioq_poll_period_us": 0, 00:19:25.457 "io_queue_requests": 512, 00:19:25.457 "delay_cmd_submit": true, 00:19:25.457 "transport_retry_count": 4, 00:19:25.457 "bdev_retry_count": 3, 00:19:25.457 "transport_ack_timeout": 0, 00:19:25.457 "ctrlr_loss_timeout_sec": 0, 00:19:25.457 "reconnect_delay_sec": 0, 00:19:25.457 "fast_io_fail_timeout_sec": 0, 00:19:25.457 "disable_auto_failback": false, 00:19:25.457 "generate_uuids": false, 00:19:25.457 "transport_tos": 0, 00:19:25.457 "nvme_error_stat": false, 00:19:25.457 "rdma_srq_size": 0, 00:19:25.457 "io_path_stat": false, 00:19:25.457 "allow_accel_sequence": false, 00:19:25.457 "rdma_max_cq_size": 0, 00:19:25.457 "rdma_cm_event_timeout_ms": 0, 00:19:25.457 "dhchap_digests": [ 00:19:25.457 "sha256", 00:19:25.457 "sha384", 00:19:25.457 "sha512" 00:19:25.457 ], 00:19:25.457 "dhchap_dhgroups": [ 00:19:25.457 "null", 00:19:25.457 "ffdhe2048", 00:19:25.457 "ffdhe3072", 00:19:25.457 "ffdhe4096", 00:19:25.457 "ffdhe6144", 00:19:25.457 "ffdhe8192" 00:19:25.457 ] 00:19:25.457 } 00:19:25.457 }, 00:19:25.457 { 00:19:25.457 "method": "bdev_nvme_attach_controller", 00:19:25.457 "params": { 00:19:25.457 "name": "TLSTEST", 00:19:25.457 "trtype": "TCP", 00:19:25.457 "adrfam": "IPv4", 00:19:25.457 "traddr": "10.0.0.2", 00:19:25.457 "trsvcid": "4420", 00:19:25.457 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.457 "prchk_reftag": false, 00:19:25.457 "prchk_guard": false, 00:19:25.457 "ctrlr_loss_timeout_sec": 0, 00:19:25.457 "reconnect_delay_sec": 0, 00:19:25.457 "fast_io_fail_timeout_sec": 0, 00:19:25.457 "psk": "key0", 00:19:25.457 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:25.457 "hdgst": false, 00:19:25.457 "ddgst": false, 00:19:25.457 "multipath": "multipath" 00:19:25.457 } 00:19:25.457 }, 00:19:25.457 { 00:19:25.457 "method": "bdev_nvme_set_hotplug", 00:19:25.457 "params": { 00:19:25.457 "period_us": 100000, 00:19:25.457 "enable": false 00:19:25.457 } 00:19:25.457 }, 00:19:25.457 { 00:19:25.457 "method": "bdev_wait_for_examine" 00:19:25.457 } 00:19:25.457 ] 00:19:25.457 }, 00:19:25.457 { 00:19:25.457 "subsystem": "nbd", 00:19:25.457 "config": [] 00:19:25.457 } 00:19:25.457 ] 00:19:25.457 }' 00:19:25.457 [2024-12-05 14:09:31.547919] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
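The JSON handed to bdevperf on /dev/fd/63 above is the saved equivalent of the runtime RPC sequence from the first pass (target/tls.sh@193-194): load the PSK into bdevperf's own keyring, then attach the controller over TLS. Condensed, with the workspace prefix dropped and flags copied from the trace:

  # bdevperf runs idle (-z) on a private RPC socket, separate from the target's.
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

  # Same key file as the target side; then dial 10.0.0.2:4420 with --psk.
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H4yWLgwwNm
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0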
00:19:25.457 [2024-12-05 14:09:31.547974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2747345 ] 00:19:25.457 [2024-12-05 14:09:31.633288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.457 [2024-12-05 14:09:31.662360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.719 [2024-12-05 14:09:31.797344] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:26.292 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.292 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:26.292 14:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:26.292 Running I/O for 10 seconds... 00:19:28.178 6223.00 IOPS, 24.31 MiB/s [2024-12-05T13:09:35.862Z] 6318.00 IOPS, 24.68 MiB/s [2024-12-05T13:09:36.805Z] 6189.33 IOPS, 24.18 MiB/s [2024-12-05T13:09:37.747Z] 6033.25 IOPS, 23.57 MiB/s [2024-12-05T13:09:38.707Z] 5916.60 IOPS, 23.11 MiB/s [2024-12-05T13:09:39.649Z] 5990.17 IOPS, 23.40 MiB/s [2024-12-05T13:09:40.589Z] 5920.57 IOPS, 23.13 MiB/s [2024-12-05T13:09:41.530Z] 5946.88 IOPS, 23.23 MiB/s [2024-12-05T13:09:42.471Z] 5977.78 IOPS, 23.35 MiB/s [2024-12-05T13:09:42.731Z] 6006.10 IOPS, 23.46 MiB/s 00:19:36.431 Latency(us) 00:19:36.431 [2024-12-05T13:09:42.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.431 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:36.431 Verification LBA range: start 0x0 length 0x2000 00:19:36.431 TLSTESTn1 : 10.02 6006.61 23.46 0.00 0.00 21274.83 5297.49 25231.36 00:19:36.431 [2024-12-05T13:09:42.731Z] =================================================================================================================== 00:19:36.431 [2024-12-05T13:09:42.731Z] Total : 6006.61 23.46 0.00 0.00 21274.83 5297.49 25231.36 00:19:36.431 { 00:19:36.431 "results": [ 00:19:36.431 { 00:19:36.431 "job": "TLSTESTn1", 00:19:36.431 "core_mask": "0x4", 00:19:36.431 "workload": "verify", 00:19:36.431 "status": "finished", 00:19:36.431 "verify_range": { 00:19:36.431 "start": 0, 00:19:36.431 "length": 8192 00:19:36.431 }, 00:19:36.431 "queue_depth": 128, 00:19:36.431 "io_size": 4096, 00:19:36.431 "runtime": 10.020132, 00:19:36.432 "iops": 6006.607497785459, 00:19:36.432 "mibps": 23.463310538224448, 00:19:36.432 "io_failed": 0, 00:19:36.432 "io_timeout": 0, 00:19:36.432 "avg_latency_us": 21274.830341435856, 00:19:36.432 "min_latency_us": 5297.493333333333, 00:19:36.432 "max_latency_us": 25231.36 00:19:36.432 } 00:19:36.432 ], 00:19:36.432 "core_count": 1 00:19:36.432 } 00:19:36.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:36.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2747345 00:19:36.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2747345 ']' 00:19:36.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2747345 00:19:36.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:19:36.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:36.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2747345 00:19:36.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:36.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:36.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2747345' 00:19:36.432 killing process with pid 2747345 00:19:36.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2747345 00:19:36.432 Received shutdown signal, test time was about 10.000000 seconds 00:19:36.432 00:19:36.432 Latency(us) 00:19:36.432 [2024-12-05T13:09:42.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.432 [2024-12-05T13:09:42.732Z] =================================================================================================================== 00:19:36.432 [2024-12-05T13:09:42.732Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:36.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2747345 00:19:36.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2747167 00:19:36.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2747167 ']' 00:19:36.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2747167 00:19:36.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:36.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:36.432 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2747167 00:19:36.693 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:36.693 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:36.693 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2747167' 00:19:36.693 killing process with pid 2747167 00:19:36.693 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2747167 00:19:36.693 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2747167 00:19:36.693 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:36.693 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:36.693 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:36.693 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.693 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2749560 00:19:36.693 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:36.693 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2749560 
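The same teardown runs after every stage. Reconstructed from the xtrace fragments above (the function lives in common/autotest_common.sh; the exact body may differ from this sketch), killprocess is roughly:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1        # @954: require a pid argument
      kill -0 "$pid" || return 1       # @958: process must still be alive
      if [ "$(uname)" = Linux ]; then  # @959
          process_name=$(ps --no-headers -o comm= "$pid")   # @960
      fi
      # @964 compares the comm against sudo; in this log it is always a
      # reactor thread (reactor_1/reactor_2), so that branch never fires
      # and its body is not recoverable from the trace.
      echo "killing process with pid $pid"   # @972
      kill "$pid"                            # @973: default SIGTERM
      wait "$pid"                            # @978: reap so sockets free up
  }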
00:19:36.693 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2749560 ']' 00:19:36.693 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.693 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:36.693 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.693 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:36.693 14:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.693 [2024-12-05 14:09:42.918472] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:19:36.693 [2024-12-05 14:09:42.918529] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.954 [2024-12-05 14:09:43.015746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.954 [2024-12-05 14:09:43.055878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.954 [2024-12-05 14:09:43.055923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.954 [2024-12-05 14:09:43.055932] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.954 [2024-12-05 14:09:43.055938] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.954 [2024-12-05 14:09:43.055944] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:36.954 [2024-12-05 14:09:43.056617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.526 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:37.526 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:37.526 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:37.526 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:37.526 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.526 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.526 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.H4yWLgwwNm 00:19:37.526 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.H4yWLgwwNm 00:19:37.526 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:37.786 [2024-12-05 14:09:43.949572] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.786 14:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:38.047 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:38.047 [2024-12-05 14:09:44.342564] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:38.047 [2024-12-05 14:09:44.342899] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:38.308 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:38.308 malloc0 00:19:38.308 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:38.569 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.H4yWLgwwNm 00:19:38.830 14:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:39.091 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2750057 00:19:39.091 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:39.091 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:39.091 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2750057 /var/tmp/bdevperf.sock 00:19:39.091 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2750057 ']' 00:19:39.091 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:39.091 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.091 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:39.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:39.091 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.091 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.091 [2024-12-05 14:09:45.196890] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:19:39.091 [2024-12-05 14:09:45.196965] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2750057 ] 00:19:39.091 [2024-12-05 14:09:45.284709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.091 [2024-12-05 14:09:45.318420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.351 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.351 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:39.351 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H4yWLgwwNm 00:19:39.351 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:39.612 [2024-12-05 14:09:45.735987] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:39.612 nvme0n1 00:19:39.612 14:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:39.872 Running I/O for 1 seconds... 
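On the initiator side the sequence mirrors the target setup: the PSK file is registered in the bdevperf application's keyring, then the controller is attached over TCP with --psk, which is what produces the "TLS support is considered experimental" notice above. The two RPCs from the trace, reflowed for readability (socket path, key file, and NQNs exactly as in this log):

    # load the PSK into bdevperf's keyring, then attach the controller with TLS
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H4yWLgwwNm
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1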
00:19:40.814 4703.00 IOPS, 18.37 MiB/s 00:19:40.814 Latency(us) 00:19:40.814 [2024-12-05T13:09:47.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.814 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:40.814 Verification LBA range: start 0x0 length 0x2000 00:19:40.814 nvme0n1 : 1.04 4656.12 18.19 0.00 0.00 27265.80 6198.61 54176.43 00:19:40.814 [2024-12-05T13:09:47.114Z] =================================================================================================================== 00:19:40.814 [2024-12-05T13:09:47.114Z] Total : 4656.12 18.19 0.00 0.00 27265.80 6198.61 54176.43 00:19:40.814 { 00:19:40.814 "results": [ 00:19:40.814 { 00:19:40.814 "job": "nvme0n1", 00:19:40.814 "core_mask": "0x2", 00:19:40.814 "workload": "verify", 00:19:40.814 "status": "finished", 00:19:40.814 "verify_range": { 00:19:40.814 "start": 0, 00:19:40.814 "length": 8192 00:19:40.814 }, 00:19:40.814 "queue_depth": 128, 00:19:40.814 "io_size": 4096, 00:19:40.814 "runtime": 1.03756, 00:19:40.814 "iops": 4656.116272793863, 00:19:40.814 "mibps": 18.187954190601026, 00:19:40.814 "io_failed": 0, 00:19:40.814 "io_timeout": 0, 00:19:40.814 "avg_latency_us": 27265.79710480922, 00:19:40.814 "min_latency_us": 6198.613333333334, 00:19:40.814 "max_latency_us": 54176.426666666666 00:19:40.814 } 00:19:40.814 ], 00:19:40.814 "core_count": 1 00:19:40.814 } 00:19:40.814 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2750057 00:19:40.814 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2750057 ']' 00:19:40.814 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2750057 00:19:40.814 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:40.814 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:40.814 14:09:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2750057 00:19:40.814 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:40.814 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:40.814 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2750057' 00:19:40.814 killing process with pid 2750057 00:19:40.814 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2750057 00:19:40.814 Received shutdown signal, test time was about 1.000000 seconds 00:19:40.814 00:19:40.814 Latency(us) 00:19:40.814 [2024-12-05T13:09:47.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.814 [2024-12-05T13:09:47.114Z] =================================================================================================================== 00:19:40.814 [2024-12-05T13:09:47.114Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:40.814 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2750057 00:19:41.075 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2749560 00:19:41.075 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2749560 ']' 00:19:41.075 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2749560 00:19:41.075 14:09:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:41.075 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:41.075 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2749560 00:19:41.075 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:41.075 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:41.075 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2749560' 00:19:41.075 killing process with pid 2749560 00:19:41.075 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2749560 00:19:41.075 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2749560 00:19:41.075 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:41.075 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:41.075 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:41.075 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.075 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2750409 00:19:41.075 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2750409 00:19:41.075 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:41.075 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2750409 ']' 00:19:41.075 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.075 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:41.075 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.075 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:41.075 14:09:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.336 [2024-12-05 14:09:47.409370] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:19:41.336 [2024-12-05 14:09:47.409424] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.336 [2024-12-05 14:09:47.504211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.336 [2024-12-05 14:09:47.549327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.336 [2024-12-05 14:09:47.549384] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:41.336 [2024-12-05 14:09:47.549393] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:41.336 [2024-12-05 14:09:47.549401] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:41.336 [2024-12-05 14:09:47.549407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:41.336 [2024-12-05 14:09:47.550186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.280 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.280 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:42.280 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:42.280 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:42.280 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.280 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.280 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:42.280 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.280 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.281 [2024-12-05 14:09:48.277507] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:42.281 malloc0 00:19:42.281 [2024-12-05 14:09:48.307760] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:42.281 [2024-12-05 14:09:48.308078] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.281 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.281 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2750718 00:19:42.281 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2750718 /var/tmp/bdevperf.sock 00:19:42.281 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:42.281 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2750718 ']' 00:19:42.281 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.281 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.281 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:42.281 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.281 14:09:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.281 [2024-12-05 14:09:48.392134] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:19:42.281 [2024-12-05 14:09:48.392194] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2750718 ] 00:19:42.281 [2024-12-05 14:09:48.479751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.281 [2024-12-05 14:09:48.513604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.220 14:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:43.220 14:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:43.221 14:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.H4yWLgwwNm 00:19:43.221 14:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:43.481 [2024-12-05 14:09:49.540753] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:43.481 nvme0n1 00:19:43.481 14:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:43.481 Running I/O for 1 seconds... 00:19:44.696 4801.00 IOPS, 18.75 MiB/s 00:19:44.696 Latency(us) 00:19:44.696 [2024-12-05T13:09:50.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.696 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:44.696 Verification LBA range: start 0x0 length 0x2000 00:19:44.696 nvme0n1 : 1.04 4754.17 18.57 0.00 0.00 26500.35 5160.96 32986.45 00:19:44.696 [2024-12-05T13:09:50.996Z] =================================================================================================================== 00:19:44.696 [2024-12-05T13:09:50.996Z] Total : 4754.17 18.57 0.00 0.00 26500.35 5160.96 32986.45 00:19:44.696 { 00:19:44.696 "results": [ 00:19:44.696 { 00:19:44.696 "job": "nvme0n1", 00:19:44.696 "core_mask": "0x2", 00:19:44.696 "workload": "verify", 00:19:44.696 "status": "finished", 00:19:44.696 "verify_range": { 00:19:44.696 "start": 0, 00:19:44.696 "length": 8192 00:19:44.696 }, 00:19:44.696 "queue_depth": 128, 00:19:44.696 "io_size": 4096, 00:19:44.696 "runtime": 1.036773, 00:19:44.696 "iops": 4754.174732559586, 00:19:44.696 "mibps": 18.570995049060883, 00:19:44.696 "io_failed": 0, 00:19:44.696 "io_timeout": 0, 00:19:44.696 "avg_latency_us": 26500.34716981132, 00:19:44.696 "min_latency_us": 5160.96, 00:19:44.696 "max_latency_us": 32986.45333333333 00:19:44.696 } 00:19:44.696 ], 00:19:44.696 "core_count": 1 00:19:44.696 } 00:19:44.696 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:44.696 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.696 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.696 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.696 14:09:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:44.696 "subsystems": [ 00:19:44.696 { 00:19:44.696 "subsystem": "keyring", 00:19:44.696 "config": [ 00:19:44.696 { 00:19:44.696 "method": "keyring_file_add_key", 00:19:44.696 "params": { 00:19:44.696 "name": "key0", 00:19:44.696 "path": "/tmp/tmp.H4yWLgwwNm" 00:19:44.696 } 00:19:44.696 } 00:19:44.696 ] 00:19:44.696 }, 00:19:44.696 { 00:19:44.696 "subsystem": "iobuf", 00:19:44.696 "config": [ 00:19:44.696 { 00:19:44.696 "method": "iobuf_set_options", 00:19:44.696 "params": { 00:19:44.696 "small_pool_count": 8192, 00:19:44.696 "large_pool_count": 1024, 00:19:44.696 "small_bufsize": 8192, 00:19:44.696 "large_bufsize": 135168, 00:19:44.696 "enable_numa": false 00:19:44.696 } 00:19:44.696 } 00:19:44.696 ] 00:19:44.696 }, 00:19:44.696 { 00:19:44.696 "subsystem": "sock", 00:19:44.696 "config": [ 00:19:44.696 { 00:19:44.696 "method": "sock_set_default_impl", 00:19:44.696 "params": { 00:19:44.696 "impl_name": "posix" 00:19:44.696 } 00:19:44.696 }, 00:19:44.696 { 00:19:44.696 "method": "sock_impl_set_options", 00:19:44.696 "params": { 00:19:44.696 "impl_name": "ssl", 00:19:44.696 "recv_buf_size": 4096, 00:19:44.696 "send_buf_size": 4096, 00:19:44.696 "enable_recv_pipe": true, 00:19:44.696 "enable_quickack": false, 00:19:44.696 "enable_placement_id": 0, 00:19:44.696 "enable_zerocopy_send_server": true, 00:19:44.696 "enable_zerocopy_send_client": false, 00:19:44.696 "zerocopy_threshold": 0, 00:19:44.696 "tls_version": 0, 00:19:44.696 "enable_ktls": false 00:19:44.696 } 00:19:44.696 }, 00:19:44.696 { 00:19:44.696 "method": "sock_impl_set_options", 00:19:44.696 "params": { 00:19:44.696 "impl_name": "posix", 00:19:44.696 "recv_buf_size": 2097152, 00:19:44.696 "send_buf_size": 2097152, 00:19:44.696 "enable_recv_pipe": true, 00:19:44.696 "enable_quickack": false, 00:19:44.696 "enable_placement_id": 0, 00:19:44.696 "enable_zerocopy_send_server": true, 00:19:44.696 "enable_zerocopy_send_client": false, 00:19:44.696 "zerocopy_threshold": 0, 00:19:44.696 "tls_version": 0, 00:19:44.696 "enable_ktls": false 00:19:44.696 } 00:19:44.696 } 00:19:44.696 ] 00:19:44.696 }, 00:19:44.696 { 00:19:44.696 "subsystem": "vmd", 00:19:44.696 "config": [] 00:19:44.696 }, 00:19:44.696 { 00:19:44.696 "subsystem": "accel", 00:19:44.696 "config": [ 00:19:44.696 { 00:19:44.696 "method": "accel_set_options", 00:19:44.696 "params": { 00:19:44.696 "small_cache_size": 128, 00:19:44.696 "large_cache_size": 16, 00:19:44.696 "task_count": 2048, 00:19:44.696 "sequence_count": 2048, 00:19:44.696 "buf_count": 2048 00:19:44.696 } 00:19:44.696 } 00:19:44.696 ] 00:19:44.696 }, 00:19:44.696 { 00:19:44.696 "subsystem": "bdev", 00:19:44.696 "config": [ 00:19:44.696 { 00:19:44.696 "method": "bdev_set_options", 00:19:44.696 "params": { 00:19:44.696 "bdev_io_pool_size": 65535, 00:19:44.696 "bdev_io_cache_size": 256, 00:19:44.696 "bdev_auto_examine": true, 00:19:44.696 "iobuf_small_cache_size": 128, 00:19:44.696 "iobuf_large_cache_size": 16 00:19:44.696 } 00:19:44.696 }, 00:19:44.696 { 00:19:44.696 "method": "bdev_raid_set_options", 00:19:44.696 "params": { 00:19:44.696 "process_window_size_kb": 1024, 00:19:44.696 "process_max_bandwidth_mb_sec": 0 00:19:44.696 } 00:19:44.696 }, 00:19:44.696 { 00:19:44.696 "method": "bdev_iscsi_set_options", 00:19:44.696 "params": { 00:19:44.696 "timeout_sec": 30 00:19:44.696 } 00:19:44.696 }, 00:19:44.696 { 00:19:44.696 "method": "bdev_nvme_set_options", 00:19:44.696 "params": { 00:19:44.696 "action_on_timeout": "none", 00:19:44.696 
"timeout_us": 0, 00:19:44.696 "timeout_admin_us": 0, 00:19:44.696 "keep_alive_timeout_ms": 10000, 00:19:44.696 "arbitration_burst": 0, 00:19:44.696 "low_priority_weight": 0, 00:19:44.696 "medium_priority_weight": 0, 00:19:44.696 "high_priority_weight": 0, 00:19:44.696 "nvme_adminq_poll_period_us": 10000, 00:19:44.696 "nvme_ioq_poll_period_us": 0, 00:19:44.696 "io_queue_requests": 0, 00:19:44.696 "delay_cmd_submit": true, 00:19:44.696 "transport_retry_count": 4, 00:19:44.696 "bdev_retry_count": 3, 00:19:44.696 "transport_ack_timeout": 0, 00:19:44.697 "ctrlr_loss_timeout_sec": 0, 00:19:44.697 "reconnect_delay_sec": 0, 00:19:44.697 "fast_io_fail_timeout_sec": 0, 00:19:44.697 "disable_auto_failback": false, 00:19:44.697 "generate_uuids": false, 00:19:44.697 "transport_tos": 0, 00:19:44.697 "nvme_error_stat": false, 00:19:44.697 "rdma_srq_size": 0, 00:19:44.697 "io_path_stat": false, 00:19:44.697 "allow_accel_sequence": false, 00:19:44.697 "rdma_max_cq_size": 0, 00:19:44.697 "rdma_cm_event_timeout_ms": 0, 00:19:44.697 "dhchap_digests": [ 00:19:44.697 "sha256", 00:19:44.697 "sha384", 00:19:44.697 "sha512" 00:19:44.697 ], 00:19:44.697 "dhchap_dhgroups": [ 00:19:44.697 "null", 00:19:44.697 "ffdhe2048", 00:19:44.697 "ffdhe3072", 00:19:44.697 "ffdhe4096", 00:19:44.697 "ffdhe6144", 00:19:44.697 "ffdhe8192" 00:19:44.697 ] 00:19:44.697 } 00:19:44.697 }, 00:19:44.697 { 00:19:44.697 "method": "bdev_nvme_set_hotplug", 00:19:44.697 "params": { 00:19:44.697 "period_us": 100000, 00:19:44.697 "enable": false 00:19:44.697 } 00:19:44.697 }, 00:19:44.697 { 00:19:44.697 "method": "bdev_malloc_create", 00:19:44.697 "params": { 00:19:44.697 "name": "malloc0", 00:19:44.697 "num_blocks": 8192, 00:19:44.697 "block_size": 4096, 00:19:44.697 "physical_block_size": 4096, 00:19:44.697 "uuid": "22d6cb31-2ba0-4f10-a053-8be59ebcb391", 00:19:44.697 "optimal_io_boundary": 0, 00:19:44.697 "md_size": 0, 00:19:44.697 "dif_type": 0, 00:19:44.697 "dif_is_head_of_md": false, 00:19:44.697 "dif_pi_format": 0 00:19:44.697 } 00:19:44.697 }, 00:19:44.697 { 00:19:44.697 "method": "bdev_wait_for_examine" 00:19:44.697 } 00:19:44.697 ] 00:19:44.697 }, 00:19:44.697 { 00:19:44.697 "subsystem": "nbd", 00:19:44.697 "config": [] 00:19:44.697 }, 00:19:44.697 { 00:19:44.697 "subsystem": "scheduler", 00:19:44.697 "config": [ 00:19:44.697 { 00:19:44.697 "method": "framework_set_scheduler", 00:19:44.697 "params": { 00:19:44.697 "name": "static" 00:19:44.697 } 00:19:44.697 } 00:19:44.697 ] 00:19:44.697 }, 00:19:44.697 { 00:19:44.697 "subsystem": "nvmf", 00:19:44.697 "config": [ 00:19:44.697 { 00:19:44.697 "method": "nvmf_set_config", 00:19:44.697 "params": { 00:19:44.697 "discovery_filter": "match_any", 00:19:44.697 "admin_cmd_passthru": { 00:19:44.697 "identify_ctrlr": false 00:19:44.697 }, 00:19:44.697 "dhchap_digests": [ 00:19:44.697 "sha256", 00:19:44.697 "sha384", 00:19:44.697 "sha512" 00:19:44.697 ], 00:19:44.697 "dhchap_dhgroups": [ 00:19:44.697 "null", 00:19:44.697 "ffdhe2048", 00:19:44.697 "ffdhe3072", 00:19:44.697 "ffdhe4096", 00:19:44.697 "ffdhe6144", 00:19:44.697 "ffdhe8192" 00:19:44.697 ] 00:19:44.697 } 00:19:44.697 }, 00:19:44.697 { 00:19:44.697 "method": "nvmf_set_max_subsystems", 00:19:44.697 "params": { 00:19:44.697 "max_subsystems": 1024 00:19:44.697 } 00:19:44.697 }, 00:19:44.697 { 00:19:44.697 "method": "nvmf_set_crdt", 00:19:44.697 "params": { 00:19:44.697 "crdt1": 0, 00:19:44.697 "crdt2": 0, 00:19:44.697 "crdt3": 0 00:19:44.697 } 00:19:44.697 }, 00:19:44.697 { 00:19:44.697 "method": "nvmf_create_transport", 00:19:44.697 "params": 
{ 00:19:44.697 "trtype": "TCP", 00:19:44.697 "max_queue_depth": 128, 00:19:44.697 "max_io_qpairs_per_ctrlr": 127, 00:19:44.697 "in_capsule_data_size": 4096, 00:19:44.697 "max_io_size": 131072, 00:19:44.697 "io_unit_size": 131072, 00:19:44.697 "max_aq_depth": 128, 00:19:44.697 "num_shared_buffers": 511, 00:19:44.697 "buf_cache_size": 4294967295, 00:19:44.697 "dif_insert_or_strip": false, 00:19:44.697 "zcopy": false, 00:19:44.697 "c2h_success": false, 00:19:44.697 "sock_priority": 0, 00:19:44.697 "abort_timeout_sec": 1, 00:19:44.697 "ack_timeout": 0, 00:19:44.697 "data_wr_pool_size": 0 00:19:44.697 } 00:19:44.697 }, 00:19:44.697 { 00:19:44.697 "method": "nvmf_create_subsystem", 00:19:44.697 "params": { 00:19:44.697 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.697 "allow_any_host": false, 00:19:44.697 "serial_number": "00000000000000000000", 00:19:44.697 "model_number": "SPDK bdev Controller", 00:19:44.697 "max_namespaces": 32, 00:19:44.697 "min_cntlid": 1, 00:19:44.697 "max_cntlid": 65519, 00:19:44.697 "ana_reporting": false 00:19:44.697 } 00:19:44.697 }, 00:19:44.697 { 00:19:44.697 "method": "nvmf_subsystem_add_host", 00:19:44.697 "params": { 00:19:44.697 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.697 "host": "nqn.2016-06.io.spdk:host1", 00:19:44.697 "psk": "key0" 00:19:44.697 } 00:19:44.697 }, 00:19:44.697 { 00:19:44.697 "method": "nvmf_subsystem_add_ns", 00:19:44.697 "params": { 00:19:44.697 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.697 "namespace": { 00:19:44.697 "nsid": 1, 00:19:44.697 "bdev_name": "malloc0", 00:19:44.697 "nguid": "22D6CB312BA04F10A0538BE59EBCB391", 00:19:44.697 "uuid": "22d6cb31-2ba0-4f10-a053-8be59ebcb391", 00:19:44.697 "no_auto_visible": false 00:19:44.697 } 00:19:44.697 } 00:19:44.697 }, 00:19:44.697 { 00:19:44.697 "method": "nvmf_subsystem_add_listener", 00:19:44.697 "params": { 00:19:44.697 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.697 "listen_address": { 00:19:44.697 "trtype": "TCP", 00:19:44.697 "adrfam": "IPv4", 00:19:44.697 "traddr": "10.0.0.2", 00:19:44.697 "trsvcid": "4420" 00:19:44.697 }, 00:19:44.697 "secure_channel": false, 00:19:44.697 "sock_impl": "ssl" 00:19:44.697 } 00:19:44.697 } 00:19:44.697 ] 00:19:44.697 } 00:19:44.697 ] 00:19:44.697 }' 00:19:44.697 14:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:44.957 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:44.957 "subsystems": [ 00:19:44.957 { 00:19:44.957 "subsystem": "keyring", 00:19:44.957 "config": [ 00:19:44.957 { 00:19:44.957 "method": "keyring_file_add_key", 00:19:44.957 "params": { 00:19:44.957 "name": "key0", 00:19:44.957 "path": "/tmp/tmp.H4yWLgwwNm" 00:19:44.957 } 00:19:44.957 } 00:19:44.957 ] 00:19:44.957 }, 00:19:44.957 { 00:19:44.957 "subsystem": "iobuf", 00:19:44.957 "config": [ 00:19:44.957 { 00:19:44.957 "method": "iobuf_set_options", 00:19:44.957 "params": { 00:19:44.957 "small_pool_count": 8192, 00:19:44.957 "large_pool_count": 1024, 00:19:44.957 "small_bufsize": 8192, 00:19:44.957 "large_bufsize": 135168, 00:19:44.957 "enable_numa": false 00:19:44.957 } 00:19:44.957 } 00:19:44.957 ] 00:19:44.957 }, 00:19:44.957 { 00:19:44.957 "subsystem": "sock", 00:19:44.957 "config": [ 00:19:44.957 { 00:19:44.957 "method": "sock_set_default_impl", 00:19:44.957 "params": { 00:19:44.957 "impl_name": "posix" 00:19:44.957 } 00:19:44.957 }, 00:19:44.957 { 00:19:44.957 "method": "sock_impl_set_options", 00:19:44.957 
"params": { 00:19:44.957 "impl_name": "ssl", 00:19:44.957 "recv_buf_size": 4096, 00:19:44.957 "send_buf_size": 4096, 00:19:44.957 "enable_recv_pipe": true, 00:19:44.957 "enable_quickack": false, 00:19:44.957 "enable_placement_id": 0, 00:19:44.957 "enable_zerocopy_send_server": true, 00:19:44.957 "enable_zerocopy_send_client": false, 00:19:44.957 "zerocopy_threshold": 0, 00:19:44.957 "tls_version": 0, 00:19:44.957 "enable_ktls": false 00:19:44.957 } 00:19:44.957 }, 00:19:44.957 { 00:19:44.957 "method": "sock_impl_set_options", 00:19:44.957 "params": { 00:19:44.957 "impl_name": "posix", 00:19:44.957 "recv_buf_size": 2097152, 00:19:44.957 "send_buf_size": 2097152, 00:19:44.957 "enable_recv_pipe": true, 00:19:44.957 "enable_quickack": false, 00:19:44.957 "enable_placement_id": 0, 00:19:44.957 "enable_zerocopy_send_server": true, 00:19:44.957 "enable_zerocopy_send_client": false, 00:19:44.957 "zerocopy_threshold": 0, 00:19:44.957 "tls_version": 0, 00:19:44.957 "enable_ktls": false 00:19:44.957 } 00:19:44.957 } 00:19:44.957 ] 00:19:44.957 }, 00:19:44.957 { 00:19:44.957 "subsystem": "vmd", 00:19:44.957 "config": [] 00:19:44.957 }, 00:19:44.957 { 00:19:44.957 "subsystem": "accel", 00:19:44.957 "config": [ 00:19:44.957 { 00:19:44.957 "method": "accel_set_options", 00:19:44.957 "params": { 00:19:44.957 "small_cache_size": 128, 00:19:44.957 "large_cache_size": 16, 00:19:44.957 "task_count": 2048, 00:19:44.957 "sequence_count": 2048, 00:19:44.957 "buf_count": 2048 00:19:44.957 } 00:19:44.957 } 00:19:44.957 ] 00:19:44.957 }, 00:19:44.957 { 00:19:44.957 "subsystem": "bdev", 00:19:44.957 "config": [ 00:19:44.957 { 00:19:44.957 "method": "bdev_set_options", 00:19:44.957 "params": { 00:19:44.957 "bdev_io_pool_size": 65535, 00:19:44.957 "bdev_io_cache_size": 256, 00:19:44.957 "bdev_auto_examine": true, 00:19:44.957 "iobuf_small_cache_size": 128, 00:19:44.957 "iobuf_large_cache_size": 16 00:19:44.957 } 00:19:44.957 }, 00:19:44.957 { 00:19:44.957 "method": "bdev_raid_set_options", 00:19:44.957 "params": { 00:19:44.957 "process_window_size_kb": 1024, 00:19:44.957 "process_max_bandwidth_mb_sec": 0 00:19:44.957 } 00:19:44.957 }, 00:19:44.957 { 00:19:44.957 "method": "bdev_iscsi_set_options", 00:19:44.957 "params": { 00:19:44.957 "timeout_sec": 30 00:19:44.957 } 00:19:44.957 }, 00:19:44.957 { 00:19:44.957 "method": "bdev_nvme_set_options", 00:19:44.957 "params": { 00:19:44.957 "action_on_timeout": "none", 00:19:44.957 "timeout_us": 0, 00:19:44.957 "timeout_admin_us": 0, 00:19:44.957 "keep_alive_timeout_ms": 10000, 00:19:44.957 "arbitration_burst": 0, 00:19:44.957 "low_priority_weight": 0, 00:19:44.957 "medium_priority_weight": 0, 00:19:44.957 "high_priority_weight": 0, 00:19:44.957 "nvme_adminq_poll_period_us": 10000, 00:19:44.957 "nvme_ioq_poll_period_us": 0, 00:19:44.957 "io_queue_requests": 512, 00:19:44.957 "delay_cmd_submit": true, 00:19:44.957 "transport_retry_count": 4, 00:19:44.957 "bdev_retry_count": 3, 00:19:44.957 "transport_ack_timeout": 0, 00:19:44.957 "ctrlr_loss_timeout_sec": 0, 00:19:44.957 "reconnect_delay_sec": 0, 00:19:44.957 "fast_io_fail_timeout_sec": 0, 00:19:44.957 "disable_auto_failback": false, 00:19:44.957 "generate_uuids": false, 00:19:44.958 "transport_tos": 0, 00:19:44.958 "nvme_error_stat": false, 00:19:44.958 "rdma_srq_size": 0, 00:19:44.958 "io_path_stat": false, 00:19:44.958 "allow_accel_sequence": false, 00:19:44.958 "rdma_max_cq_size": 0, 00:19:44.958 "rdma_cm_event_timeout_ms": 0, 00:19:44.958 "dhchap_digests": [ 00:19:44.958 "sha256", 00:19:44.958 "sha384", 00:19:44.958 
"sha512" 00:19:44.958 ], 00:19:44.958 "dhchap_dhgroups": [ 00:19:44.958 "null", 00:19:44.958 "ffdhe2048", 00:19:44.958 "ffdhe3072", 00:19:44.958 "ffdhe4096", 00:19:44.958 "ffdhe6144", 00:19:44.958 "ffdhe8192" 00:19:44.958 ] 00:19:44.958 } 00:19:44.958 }, 00:19:44.958 { 00:19:44.958 "method": "bdev_nvme_attach_controller", 00:19:44.958 "params": { 00:19:44.958 "name": "nvme0", 00:19:44.958 "trtype": "TCP", 00:19:44.958 "adrfam": "IPv4", 00:19:44.958 "traddr": "10.0.0.2", 00:19:44.958 "trsvcid": "4420", 00:19:44.958 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.958 "prchk_reftag": false, 00:19:44.958 "prchk_guard": false, 00:19:44.958 "ctrlr_loss_timeout_sec": 0, 00:19:44.958 "reconnect_delay_sec": 0, 00:19:44.958 "fast_io_fail_timeout_sec": 0, 00:19:44.958 "psk": "key0", 00:19:44.958 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:44.958 "hdgst": false, 00:19:44.958 "ddgst": false, 00:19:44.958 "multipath": "multipath" 00:19:44.958 } 00:19:44.958 }, 00:19:44.958 { 00:19:44.958 "method": "bdev_nvme_set_hotplug", 00:19:44.958 "params": { 00:19:44.958 "period_us": 100000, 00:19:44.958 "enable": false 00:19:44.958 } 00:19:44.958 }, 00:19:44.958 { 00:19:44.958 "method": "bdev_enable_histogram", 00:19:44.958 "params": { 00:19:44.958 "name": "nvme0n1", 00:19:44.958 "enable": true 00:19:44.958 } 00:19:44.958 }, 00:19:44.958 { 00:19:44.958 "method": "bdev_wait_for_examine" 00:19:44.958 } 00:19:44.958 ] 00:19:44.958 }, 00:19:44.958 { 00:19:44.958 "subsystem": "nbd", 00:19:44.958 "config": [] 00:19:44.958 } 00:19:44.958 ] 00:19:44.958 }' 00:19:44.958 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2750718 00:19:44.958 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2750718 ']' 00:19:44.958 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2750718 00:19:44.958 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:44.958 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:44.958 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2750718 00:19:44.958 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:44.958 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:44.958 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2750718' 00:19:44.958 killing process with pid 2750718 00:19:44.958 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2750718 00:19:44.958 Received shutdown signal, test time was about 1.000000 seconds 00:19:44.958 00:19:44.958 Latency(us) 00:19:44.958 [2024-12-05T13:09:51.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.958 [2024-12-05T13:09:51.258Z] =================================================================================================================== 00:19:44.958 [2024-12-05T13:09:51.258Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:44.958 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2750718 00:19:45.218 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2750409 00:19:45.218 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2750409 
']' 00:19:45.218 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2750409 00:19:45.218 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:45.218 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:45.218 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2750409 00:19:45.218 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:45.218 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:45.218 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2750409' 00:19:45.218 killing process with pid 2750409 00:19:45.218 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2750409 00:19:45.218 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2750409 00:19:45.218 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:45.218 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:45.218 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:45.218 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:45.218 "subsystems": [ 00:19:45.218 { 00:19:45.218 "subsystem": "keyring", 00:19:45.218 "config": [ 00:19:45.218 { 00:19:45.218 "method": "keyring_file_add_key", 00:19:45.218 "params": { 00:19:45.218 "name": "key0", 00:19:45.218 "path": "/tmp/tmp.H4yWLgwwNm" 00:19:45.218 } 00:19:45.218 } 00:19:45.218 ] 00:19:45.218 }, 00:19:45.218 { 00:19:45.218 "subsystem": "iobuf", 00:19:45.218 "config": [ 00:19:45.218 { 00:19:45.218 "method": "iobuf_set_options", 00:19:45.218 "params": { 00:19:45.218 "small_pool_count": 8192, 00:19:45.218 "large_pool_count": 1024, 00:19:45.218 "small_bufsize": 8192, 00:19:45.218 "large_bufsize": 135168, 00:19:45.218 "enable_numa": false 00:19:45.218 } 00:19:45.218 } 00:19:45.218 ] 00:19:45.218 }, 00:19:45.218 { 00:19:45.218 "subsystem": "sock", 00:19:45.218 "config": [ 00:19:45.218 { 00:19:45.218 "method": "sock_set_default_impl", 00:19:45.218 "params": { 00:19:45.218 "impl_name": "posix" 00:19:45.218 } 00:19:45.218 }, 00:19:45.218 { 00:19:45.218 "method": "sock_impl_set_options", 00:19:45.218 "params": { 00:19:45.218 "impl_name": "ssl", 00:19:45.218 "recv_buf_size": 4096, 00:19:45.218 "send_buf_size": 4096, 00:19:45.218 "enable_recv_pipe": true, 00:19:45.218 "enable_quickack": false, 00:19:45.218 "enable_placement_id": 0, 00:19:45.218 "enable_zerocopy_send_server": true, 00:19:45.218 "enable_zerocopy_send_client": false, 00:19:45.218 "zerocopy_threshold": 0, 00:19:45.218 "tls_version": 0, 00:19:45.218 "enable_ktls": false 00:19:45.218 } 00:19:45.218 }, 00:19:45.218 { 00:19:45.218 "method": "sock_impl_set_options", 00:19:45.218 "params": { 00:19:45.218 "impl_name": "posix", 00:19:45.218 "recv_buf_size": 2097152, 00:19:45.218 "send_buf_size": 2097152, 00:19:45.218 "enable_recv_pipe": true, 00:19:45.218 "enable_quickack": false, 00:19:45.218 "enable_placement_id": 0, 00:19:45.218 "enable_zerocopy_send_server": true, 00:19:45.218 "enable_zerocopy_send_client": false, 00:19:45.218 "zerocopy_threshold": 0, 00:19:45.218 "tls_version": 0, 00:19:45.218 "enable_ktls": 
false 00:19:45.218 } 00:19:45.218 } 00:19:45.218 ] 00:19:45.218 }, 00:19:45.218 { 00:19:45.218 "subsystem": "vmd", 00:19:45.218 "config": [] 00:19:45.218 }, 00:19:45.218 { 00:19:45.218 "subsystem": "accel", 00:19:45.218 "config": [ 00:19:45.218 { 00:19:45.218 "method": "accel_set_options", 00:19:45.218 "params": { 00:19:45.218 "small_cache_size": 128, 00:19:45.218 "large_cache_size": 16, 00:19:45.218 "task_count": 2048, 00:19:45.218 "sequence_count": 2048, 00:19:45.218 "buf_count": 2048 00:19:45.218 } 00:19:45.218 } 00:19:45.218 ] 00:19:45.218 }, 00:19:45.218 { 00:19:45.218 "subsystem": "bdev", 00:19:45.218 "config": [ 00:19:45.218 { 00:19:45.218 "method": "bdev_set_options", 00:19:45.218 "params": { 00:19:45.219 "bdev_io_pool_size": 65535, 00:19:45.219 "bdev_io_cache_size": 256, 00:19:45.219 "bdev_auto_examine": true, 00:19:45.219 "iobuf_small_cache_size": 128, 00:19:45.219 "iobuf_large_cache_size": 16 00:19:45.219 } 00:19:45.219 }, 00:19:45.219 { 00:19:45.219 "method": "bdev_raid_set_options", 00:19:45.219 "params": { 00:19:45.219 "process_window_size_kb": 1024, 00:19:45.219 "process_max_bandwidth_mb_sec": 0 00:19:45.219 } 00:19:45.219 }, 00:19:45.219 { 00:19:45.219 "method": "bdev_iscsi_set_options", 00:19:45.219 "params": { 00:19:45.219 "timeout_sec": 30 00:19:45.219 } 00:19:45.219 }, 00:19:45.219 { 00:19:45.219 "method": "bdev_nvme_set_options", 00:19:45.219 "params": { 00:19:45.219 "action_on_timeout": "none", 00:19:45.219 "timeout_us": 0, 00:19:45.219 "timeout_admin_us": 0, 00:19:45.219 "keep_alive_timeout_ms": 10000, 00:19:45.219 "arbitration_burst": 0, 00:19:45.219 "low_priority_weight": 0, 00:19:45.219 "medium_priority_weight": 0, 00:19:45.219 "high_priority_weight": 0, 00:19:45.219 "nvme_adminq_poll_period_us": 10000, 00:19:45.219 "nvme_ioq_poll_period_us": 0, 00:19:45.219 "io_queue_requests": 0, 00:19:45.219 "delay_cmd_submit": true, 00:19:45.219 "transport_retry_count": 4, 00:19:45.219 "bdev_retry_count": 3, 00:19:45.219 "transport_ack_timeout": 0, 00:19:45.219 "ctrlr_loss_timeout_sec": 0, 00:19:45.219 "reconnect_delay_sec": 0, 00:19:45.219 "fast_io_fail_timeout_sec": 0, 00:19:45.219 "disable_auto_failback": false, 00:19:45.219 "generate_uuids": false, 00:19:45.219 "transport_tos": 0, 00:19:45.219 "nvme_error_stat": false, 00:19:45.219 "rdma_srq_size": 0, 00:19:45.219 "io_path_stat": false, 00:19:45.219 "allow_accel_sequence": false, 00:19:45.219 "rdma_max_cq_size": 0, 00:19:45.219 "rdma_cm_event_timeout_ms": 0, 00:19:45.219 "dhchap_digests": [ 00:19:45.219 "sha256", 00:19:45.219 "sha384", 00:19:45.219 "sha512" 00:19:45.219 ], 00:19:45.219 "dhchap_dhgroups": [ 00:19:45.219 "null", 00:19:45.219 "ffdhe2048", 00:19:45.219 "ffdhe3072", 00:19:45.219 "ffdhe4096", 00:19:45.219 "ffdhe6144", 00:19:45.219 "ffdhe8192" 00:19:45.219 ] 00:19:45.219 } 00:19:45.219 }, 00:19:45.219 { 00:19:45.219 "method": "bdev_nvme_set_hotplug", 00:19:45.219 "params": { 00:19:45.219 "period_us": 100000, 00:19:45.219 "enable": false 00:19:45.219 } 00:19:45.219 }, 00:19:45.219 { 00:19:45.219 "method": "bdev_malloc_create", 00:19:45.219 "params": { 00:19:45.219 "name": "malloc0", 00:19:45.219 "num_blocks": 8192, 00:19:45.219 "block_size": 4096, 00:19:45.219 "physical_block_size": 4096, 00:19:45.219 "uuid": "22d6cb31-2ba0-4f10-a053-8be59ebcb391", 00:19:45.219 "optimal_io_boundary": 0, 00:19:45.219 "md_size": 0, 00:19:45.219 "dif_type": 0, 00:19:45.219 "dif_is_head_of_md": false, 00:19:45.219 "dif_pi_format": 0 00:19:45.219 } 00:19:45.219 }, 00:19:45.219 { 00:19:45.219 "method": "bdev_wait_for_examine" 
00:19:45.219 } 00:19:45.219 ] 00:19:45.219 }, 00:19:45.219 { 00:19:45.219 "subsystem": "nbd", 00:19:45.219 "config": [] 00:19:45.219 }, 00:19:45.219 { 00:19:45.219 "subsystem": "scheduler", 00:19:45.219 "config": [ 00:19:45.219 { 00:19:45.219 "method": "framework_set_scheduler", 00:19:45.219 "params": { 00:19:45.219 "name": "static" 00:19:45.219 } 00:19:45.219 } 00:19:45.219 ] 00:19:45.219 }, 00:19:45.219 { 00:19:45.219 "subsystem": "nvmf", 00:19:45.219 "config": [ 00:19:45.219 { 00:19:45.219 "method": "nvmf_set_config", 00:19:45.219 "params": { 00:19:45.219 "discovery_filter": "match_any", 00:19:45.219 "admin_cmd_passthru": { 00:19:45.219 "identify_ctrlr": false 00:19:45.219 }, 00:19:45.219 "dhchap_digests": [ 00:19:45.219 "sha256", 00:19:45.219 "sha384", 00:19:45.219 "sha512" 00:19:45.219 ], 00:19:45.219 "dhchap_dhgroups": [ 00:19:45.219 "null", 00:19:45.219 "ffdhe2048", 00:19:45.219 "ffdhe3072", 00:19:45.219 "ffdhe4096", 00:19:45.219 "ffdhe6144", 00:19:45.219 "ffdhe8192" 00:19:45.219 ] 00:19:45.219 } 00:19:45.219 }, 00:19:45.219 { 00:19:45.219 "method": "nvmf_set_max_subsystems", 00:19:45.219 "params": { 00:19:45.219 "max_subsystems": 1024 00:19:45.219 } 00:19:45.219 }, 00:19:45.219 { 00:19:45.219 "method": "nvmf_set_crdt", 00:19:45.219 "params": { 00:19:45.219 "crdt1": 0, 00:19:45.219 "crdt2": 0, 00:19:45.219 "crdt3": 0 00:19:45.219 } 00:19:45.219 }, 00:19:45.219 { 00:19:45.219 "method": "nvmf_create_transport", 00:19:45.219 "params": { 00:19:45.219 "trtype": "TCP", 00:19:45.219 "max_queue_depth": 128, 00:19:45.219 "max_io_qpairs_per_ctrlr": 127, 00:19:45.219 "in_capsule_data_size": 4096, 00:19:45.219 "max_io_size": 131072, 00:19:45.219 "io_unit_size": 131072, 00:19:45.219 "max_aq_depth": 128, 00:19:45.219 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.219 "num_shared_buffers": 511, 00:19:45.219 "buf_cache_size": 4294967295, 00:19:45.219 "dif_insert_or_strip": false, 00:19:45.219 "zcopy": false, 00:19:45.219 "c2h_success": false, 00:19:45.219 "sock_priority": 0, 00:19:45.219 "abort_timeout_sec": 1, 00:19:45.219 "ack_timeout": 0, 00:19:45.219 "data_wr_pool_size": 0 00:19:45.219 } 00:19:45.219 }, 00:19:45.219 { 00:19:45.219 "method": "nvmf_create_subsystem", 00:19:45.219 "params": { 00:19:45.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.219 "allow_any_host": false, 00:19:45.219 "serial_number": "00000000000000000000", 00:19:45.219 "model_number": "SPDK bdev Controller", 00:19:45.219 "max_namespaces": 32, 00:19:45.219 "min_cntlid": 1, 00:19:45.219 "max_cntlid": 65519, 00:19:45.219 "ana_reporting": false 00:19:45.219 } 00:19:45.219 }, 00:19:45.219 { 00:19:45.219 "method": "nvmf_subsystem_add_host", 00:19:45.219 "params": { 00:19:45.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.219 "host": "nqn.2016-06.io.spdk:host1", 00:19:45.219 "psk": "key0" 00:19:45.219 } 00:19:45.219 }, 00:19:45.219 { 00:19:45.219 "method": "nvmf_subsystem_add_ns", 00:19:45.219 "params": { 00:19:45.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.219 "namespace": { 00:19:45.219 "nsid": 1, 00:19:45.219 "bdev_name": "malloc0", 00:19:45.219 "nguid": "22D6CB312BA04F10A0538BE59EBCB391", 00:19:45.219 "uuid": "22d6cb31-2ba0-4f10-a053-8be59ebcb391", 00:19:45.219 "no_auto_visible": false 00:19:45.219 } 00:19:45.219 } 00:19:45.219 }, 00:19:45.219 { 00:19:45.219 "method": "nvmf_subsystem_add_listener", 00:19:45.219 "params": { 00:19:45.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:45.219 "listen_address": { 00:19:45.219 "trtype": "TCP", 00:19:45.219 "adrfam": "IPv4", 
00:19:45.219 "traddr": "10.0.0.2", 00:19:45.219 "trsvcid": "4420" 00:19:45.219 }, 00:19:45.219 "secure_channel": false, 00:19:45.219 "sock_impl": "ssl" 00:19:45.219 } 00:19:45.219 } 00:19:45.219 ] 00:19:45.219 } 00:19:45.219 ] 00:19:45.219 }' 00:19:45.479 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2751273 00:19:45.479 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:45.479 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2751273 00:19:45.479 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2751273 ']' 00:19:45.479 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.479 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.479 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.479 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.479 14:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.479 [2024-12-05 14:09:51.571028] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:19:45.479 [2024-12-05 14:09:51.571086] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.479 [2024-12-05 14:09:51.661614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.479 [2024-12-05 14:09:51.691005] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.479 [2024-12-05 14:09:51.691032] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.479 [2024-12-05 14:09:51.691038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:45.479 [2024-12-05 14:09:51.691042] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:45.479 [2024-12-05 14:09:51.691046] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:45.479 [2024-12-05 14:09:51.691529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.739 [2024-12-05 14:09:51.885698] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.739 [2024-12-05 14:09:51.917735] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:45.739 [2024-12-05 14:09:51.917928] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.325 14:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.325 14:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:46.325 14:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:46.325 14:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:46.325 14:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.325 14:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.325 14:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2751476 00:19:46.325 14:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2751476 /var/tmp/bdevperf.sock 00:19:46.325 14:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2751476 ']' 00:19:46.325 14:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:46.325 14:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:46.325 14:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:46.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
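The same config replay now happens for the initiator: bdevperf is started with -c /dev/fd/63, so the bperfcfg JSON echoed next recreates the key0 keyring entry and the TLS-attached nvme0 controller at startup, with no attach RPC required. Farther down, the test therefore only has to confirm that the config-created controller exists before running I/O, e.g.:

    # verify the controller created from the startup config is present
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
    # expected output: nvme0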
00:19:46.325 14:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:46.325 14:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:46.325 14:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.325 14:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:46.325 "subsystems": [ 00:19:46.325 { 00:19:46.325 "subsystem": "keyring", 00:19:46.325 "config": [ 00:19:46.325 { 00:19:46.325 "method": "keyring_file_add_key", 00:19:46.325 "params": { 00:19:46.325 "name": "key0", 00:19:46.326 "path": "/tmp/tmp.H4yWLgwwNm" 00:19:46.326 } 00:19:46.326 } 00:19:46.326 ] 00:19:46.326 }, 00:19:46.326 { 00:19:46.326 "subsystem": "iobuf", 00:19:46.326 "config": [ 00:19:46.326 { 00:19:46.326 "method": "iobuf_set_options", 00:19:46.326 "params": { 00:19:46.326 "small_pool_count": 8192, 00:19:46.326 "large_pool_count": 1024, 00:19:46.326 "small_bufsize": 8192, 00:19:46.326 "large_bufsize": 135168, 00:19:46.326 "enable_numa": false 00:19:46.326 } 00:19:46.326 } 00:19:46.326 ] 00:19:46.326 }, 00:19:46.326 { 00:19:46.326 "subsystem": "sock", 00:19:46.326 "config": [ 00:19:46.326 { 00:19:46.326 "method": "sock_set_default_impl", 00:19:46.326 "params": { 00:19:46.326 "impl_name": "posix" 00:19:46.326 } 00:19:46.326 }, 00:19:46.326 { 00:19:46.326 "method": "sock_impl_set_options", 00:19:46.326 "params": { 00:19:46.326 "impl_name": "ssl", 00:19:46.326 "recv_buf_size": 4096, 00:19:46.326 "send_buf_size": 4096, 00:19:46.326 "enable_recv_pipe": true, 00:19:46.326 "enable_quickack": false, 00:19:46.326 "enable_placement_id": 0, 00:19:46.326 "enable_zerocopy_send_server": true, 00:19:46.326 "enable_zerocopy_send_client": false, 00:19:46.326 "zerocopy_threshold": 0, 00:19:46.326 "tls_version": 0, 00:19:46.326 "enable_ktls": false 00:19:46.326 } 00:19:46.326 }, 00:19:46.326 { 00:19:46.326 "method": "sock_impl_set_options", 00:19:46.326 "params": { 00:19:46.326 "impl_name": "posix", 00:19:46.326 "recv_buf_size": 2097152, 00:19:46.326 "send_buf_size": 2097152, 00:19:46.326 "enable_recv_pipe": true, 00:19:46.326 "enable_quickack": false, 00:19:46.326 "enable_placement_id": 0, 00:19:46.326 "enable_zerocopy_send_server": true, 00:19:46.326 "enable_zerocopy_send_client": false, 00:19:46.326 "zerocopy_threshold": 0, 00:19:46.326 "tls_version": 0, 00:19:46.326 "enable_ktls": false 00:19:46.326 } 00:19:46.326 } 00:19:46.326 ] 00:19:46.326 }, 00:19:46.326 { 00:19:46.326 "subsystem": "vmd", 00:19:46.326 "config": [] 00:19:46.326 }, 00:19:46.326 { 00:19:46.326 "subsystem": "accel", 00:19:46.326 "config": [ 00:19:46.326 { 00:19:46.326 "method": "accel_set_options", 00:19:46.326 "params": { 00:19:46.326 "small_cache_size": 128, 00:19:46.326 "large_cache_size": 16, 00:19:46.326 "task_count": 2048, 00:19:46.326 "sequence_count": 2048, 00:19:46.326 "buf_count": 2048 00:19:46.326 } 00:19:46.326 } 00:19:46.326 ] 00:19:46.326 }, 00:19:46.326 { 00:19:46.326 "subsystem": "bdev", 00:19:46.326 "config": [ 00:19:46.326 { 00:19:46.326 "method": "bdev_set_options", 00:19:46.326 "params": { 00:19:46.326 "bdev_io_pool_size": 65535, 00:19:46.326 "bdev_io_cache_size": 256, 00:19:46.326 "bdev_auto_examine": true, 00:19:46.326 "iobuf_small_cache_size": 128, 00:19:46.326 "iobuf_large_cache_size": 16 00:19:46.326 } 00:19:46.326 }, 00:19:46.326 { 00:19:46.326 "method": 
"bdev_raid_set_options", 00:19:46.326 "params": { 00:19:46.326 "process_window_size_kb": 1024, 00:19:46.326 "process_max_bandwidth_mb_sec": 0 00:19:46.326 } 00:19:46.326 }, 00:19:46.326 { 00:19:46.326 "method": "bdev_iscsi_set_options", 00:19:46.326 "params": { 00:19:46.326 "timeout_sec": 30 00:19:46.326 } 00:19:46.326 }, 00:19:46.326 { 00:19:46.326 "method": "bdev_nvme_set_options", 00:19:46.326 "params": { 00:19:46.326 "action_on_timeout": "none", 00:19:46.326 "timeout_us": 0, 00:19:46.326 "timeout_admin_us": 0, 00:19:46.326 "keep_alive_timeout_ms": 10000, 00:19:46.326 "arbitration_burst": 0, 00:19:46.326 "low_priority_weight": 0, 00:19:46.326 "medium_priority_weight": 0, 00:19:46.326 "high_priority_weight": 0, 00:19:46.326 "nvme_adminq_poll_period_us": 10000, 00:19:46.326 "nvme_ioq_poll_period_us": 0, 00:19:46.326 "io_queue_requests": 512, 00:19:46.326 "delay_cmd_submit": true, 00:19:46.326 "transport_retry_count": 4, 00:19:46.326 "bdev_retry_count": 3, 00:19:46.326 "transport_ack_timeout": 0, 00:19:46.326 "ctrlr_loss_timeout_sec": 0, 00:19:46.326 "reconnect_delay_sec": 0, 00:19:46.326 "fast_io_fail_timeout_sec": 0, 00:19:46.326 "disable_auto_failback": false, 00:19:46.326 "generate_uuids": false, 00:19:46.326 "transport_tos": 0, 00:19:46.326 "nvme_error_stat": false, 00:19:46.326 "rdma_srq_size": 0, 00:19:46.326 "io_path_stat": false, 00:19:46.326 "allow_accel_sequence": false, 00:19:46.326 "rdma_max_cq_size": 0, 00:19:46.326 "rdma_cm_event_timeout_ms": 0, 00:19:46.326 "dhchap_digests": [ 00:19:46.326 "sha256", 00:19:46.326 "sha384", 00:19:46.326 "sha512" 00:19:46.326 ], 00:19:46.326 "dhchap_dhgroups": [ 00:19:46.326 "null", 00:19:46.326 "ffdhe2048", 00:19:46.326 "ffdhe3072", 00:19:46.326 "ffdhe4096", 00:19:46.326 "ffdhe6144", 00:19:46.326 "ffdhe8192" 00:19:46.326 ] 00:19:46.326 } 00:19:46.326 }, 00:19:46.326 { 00:19:46.326 "method": "bdev_nvme_attach_controller", 00:19:46.326 "params": { 00:19:46.326 "name": "nvme0", 00:19:46.326 "trtype": "TCP", 00:19:46.326 "adrfam": "IPv4", 00:19:46.326 "traddr": "10.0.0.2", 00:19:46.326 "trsvcid": "4420", 00:19:46.326 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.326 "prchk_reftag": false, 00:19:46.326 "prchk_guard": false, 00:19:46.326 "ctrlr_loss_timeout_sec": 0, 00:19:46.326 "reconnect_delay_sec": 0, 00:19:46.326 "fast_io_fail_timeout_sec": 0, 00:19:46.326 "psk": "key0", 00:19:46.326 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:46.326 "hdgst": false, 00:19:46.326 "ddgst": false, 00:19:46.326 "multipath": "multipath" 00:19:46.326 } 00:19:46.326 }, 00:19:46.326 { 00:19:46.326 "method": "bdev_nvme_set_hotplug", 00:19:46.326 "params": { 00:19:46.326 "period_us": 100000, 00:19:46.326 "enable": false 00:19:46.326 } 00:19:46.326 }, 00:19:46.326 { 00:19:46.326 "method": "bdev_enable_histogram", 00:19:46.326 "params": { 00:19:46.326 "name": "nvme0n1", 00:19:46.326 "enable": true 00:19:46.326 } 00:19:46.326 }, 00:19:46.326 { 00:19:46.326 "method": "bdev_wait_for_examine" 00:19:46.326 } 00:19:46.326 ] 00:19:46.326 }, 00:19:46.326 { 00:19:46.326 "subsystem": "nbd", 00:19:46.326 "config": [] 00:19:46.326 } 00:19:46.326 ] 00:19:46.326 }' 00:19:46.326 [2024-12-05 14:09:52.446773] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:19:46.326 [2024-12-05 14:09:52.446823] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2751476 ] 00:19:46.326 [2024-12-05 14:09:52.531348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.326 [2024-12-05 14:09:52.560895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.586 [2024-12-05 14:09:52.696684] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:47.155 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:47.155 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:47.155 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:47.155 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:47.155 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.155 14:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:47.415 Running I/O for 1 seconds... 00:19:48.356 4148.00 IOPS, 16.20 MiB/s 00:19:48.356 Latency(us) 00:19:48.356 [2024-12-05T13:09:54.656Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.356 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:48.356 Verification LBA range: start 0x0 length 0x2000 00:19:48.356 nvme0n1 : 1.01 4225.48 16.51 0.00 0.00 30113.46 4860.59 74274.13 00:19:48.356 [2024-12-05T13:09:54.656Z] =================================================================================================================== 00:19:48.356 [2024-12-05T13:09:54.656Z] Total : 4225.48 16.51 0.00 0.00 30113.46 4860.59 74274.13 00:19:48.356 { 00:19:48.356 "results": [ 00:19:48.356 { 00:19:48.356 "job": "nvme0n1", 00:19:48.356 "core_mask": "0x2", 00:19:48.356 "workload": "verify", 00:19:48.356 "status": "finished", 00:19:48.356 "verify_range": { 00:19:48.356 "start": 0, 00:19:48.356 "length": 8192 00:19:48.356 }, 00:19:48.356 "queue_depth": 128, 00:19:48.356 "io_size": 4096, 00:19:48.356 "runtime": 1.011956, 00:19:48.356 "iops": 4225.4801592164085, 00:19:48.356 "mibps": 16.505781871939096, 00:19:48.356 "io_failed": 0, 00:19:48.356 "io_timeout": 0, 00:19:48.356 "avg_latency_us": 30113.46439663237, 00:19:48.356 "min_latency_us": 4860.586666666667, 00:19:48.356 "max_latency_us": 74274.13333333333 00:19:48.356 } 00:19:48.356 ], 00:19:48.356 "core_count": 1 00:19:48.356 } 00:19:48.356 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:48.356 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:48.356 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:48.356 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:48.356 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:48.356 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id 
= --pid ']' 00:19:48.356 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:48.356 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:48.356 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:48.356 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:48.356 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:48.356 nvmf_trace.0 00:19:48.356 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:48.356 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2751476 00:19:48.356 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2751476 ']' 00:19:48.356 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2751476 00:19:48.356 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:48.356 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:48.356 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2751476 00:19:48.617 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:48.617 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:48.617 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2751476' 00:19:48.617 killing process with pid 2751476 00:19:48.617 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2751476 00:19:48.617 Received shutdown signal, test time was about 1.000000 seconds 00:19:48.617 00:19:48.617 Latency(us) 00:19:48.617 [2024-12-05T13:09:54.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.617 [2024-12-05T13:09:54.917Z] =================================================================================================================== 00:19:48.617 [2024-12-05T13:09:54.917Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:48.617 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2751476 00:19:48.617 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:48.617 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:48.617 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:48.618 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:48.618 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:48.618 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:48.618 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:48.618 rmmod nvme_tcp 00:19:48.618 rmmod nvme_fabrics 00:19:48.618 rmmod nvme_keyring 00:19:48.618 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:48.618 14:09:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:48.618 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:48.618 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2751273 ']' 00:19:48.618 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2751273 00:19:48.618 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2751273 ']' 00:19:48.618 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2751273 00:19:48.618 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:48.618 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:48.618 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2751273 00:19:48.618 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:48.618 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:48.878 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2751273' 00:19:48.878 killing process with pid 2751273 00:19:48.878 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2751273 00:19:48.878 14:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2751273 00:19:48.878 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:48.878 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:48.878 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:48.878 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:19:48.878 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:48.878 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:48.878 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:48.879 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:48.879 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:48.879 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.879 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:48.879 14:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.423 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:51.423 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.zb8TpikKyg /tmp/tmp.5OcaItJg6R /tmp/tmp.H4yWLgwwNm 00:19:51.423 00:19:51.423 real 1m26.618s 00:19:51.423 user 2m17.010s 00:19:51.423 sys 0m26.916s 00:19:51.423 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:51.423 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.423 ************************************ 00:19:51.423 END TEST nvmf_tls 
00:19:51.423 ************************************ 00:19:51.423 14:09:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:51.423 14:09:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:51.423 14:09:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:51.423 14:09:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:51.423 ************************************ 00:19:51.424 START TEST nvmf_fips 00:19:51.424 ************************************ 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:51.424 * Looking for test storage... 00:19:51.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:51.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.424 --rc genhtml_branch_coverage=1 00:19:51.424 --rc genhtml_function_coverage=1 00:19:51.424 --rc genhtml_legend=1 00:19:51.424 --rc geninfo_all_blocks=1 00:19:51.424 --rc geninfo_unexecuted_blocks=1 00:19:51.424 00:19:51.424 ' 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:51.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.424 --rc genhtml_branch_coverage=1 00:19:51.424 --rc genhtml_function_coverage=1 00:19:51.424 --rc genhtml_legend=1 00:19:51.424 --rc geninfo_all_blocks=1 00:19:51.424 --rc geninfo_unexecuted_blocks=1 00:19:51.424 00:19:51.424 ' 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:51.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.424 --rc genhtml_branch_coverage=1 00:19:51.424 --rc genhtml_function_coverage=1 00:19:51.424 --rc genhtml_legend=1 00:19:51.424 --rc geninfo_all_blocks=1 00:19:51.424 --rc geninfo_unexecuted_blocks=1 00:19:51.424 00:19:51.424 ' 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:51.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:51.424 --rc genhtml_branch_coverage=1 00:19:51.424 --rc genhtml_function_coverage=1 00:19:51.424 --rc genhtml_legend=1 00:19:51.424 --rc geninfo_all_blocks=1 00:19:51.424 --rc geninfo_unexecuted_blocks=1 00:19:51.424 00:19:51.424 ' 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:51.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:51.424 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:51.425 14:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:51.425 Error setting digest 00:19:51.425 40629737357F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:51.425 40629737357F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:51.425 
14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:51.425 14:09:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:59.568 14:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:59.568 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:59.568 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:59.569 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:59.569 14:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:59.569 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:59.569 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:59.569 14:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:59.569 14:10:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:59.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:19:59.569 00:19:59.569 --- 10.0.0.2 ping statistics --- 00:19:59.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.569 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:59.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:59.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:19:59.569 00:19:59.569 --- 10.0.0.1 ping statistics --- 00:19:59.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.569 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2756174 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2756174 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2756174 ']' 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.569 14:10:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:59.569 [2024-12-05 14:10:05.200671] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
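The ping exchange above closes out the network bring-up: the target-side port is moved into its own network namespace, both ends are addressed on 10.0.0.0/24, the NVMe/TCP port is opened through iptables, and nvmf_tgt is then launched inside the namespace. Condensed into a hedged sketch, with generic interface and namespace names standing in for this run's cvl_0_* devices:

    # Isolate the target NIC in a netns so initiator and target share one host.
    ip netns add nvmf_tgt_ns
    ip link set eth_tgt netns nvmf_tgt_ns            # target-side port
    ip addr add 10.0.0.1/24 dev eth_ini              # initiator side
    ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev eth_tgt
    ip link set eth_ini up
    ip netns exec nvmf_tgt_ns ip link set eth_tgt up
    ip netns exec nvmf_tgt_ns ip link set lo up
    iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # initiator -> target
    ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1     # target -> initiator
    ip netns exec nvmf_tgt_ns ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2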
00:19:59.569 [2024-12-05 14:10:05.200745] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.569 [2024-12-05 14:10:05.301063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.569 [2024-12-05 14:10:05.350746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.569 [2024-12-05 14:10:05.350797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.569 [2024-12-05 14:10:05.350806] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.569 [2024-12-05 14:10:05.350813] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.569 [2024-12-05 14:10:05.350820] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:59.569 [2024-12-05 14:10:05.351576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.831 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.831 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:59.831 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:59.831 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:59.831 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:59.831 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.831 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:59.831 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:59.831 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:59.831 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Qk3 00:19:59.831 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:59.831 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Qk3 00:19:59.831 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Qk3 00:19:59.831 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Qk3 00:19:59.831 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:00.092 [2024-12-05 14:10:06.234622] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.092 [2024-12-05 14:10:06.250672] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:00.092 [2024-12-05 14:10:06.251001] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.092 malloc0 00:20:00.092 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:00.092 14:10:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2756528 00:20:00.092 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2756528 /var/tmp/bdevperf.sock 00:20:00.092 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:00.092 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2756528 ']' 00:20:00.092 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.092 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.092 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:00.092 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.092 14:10:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:00.362 [2024-12-05 14:10:06.393910] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:20:00.362 [2024-12-05 14:10:06.393984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2756528 ] 00:20:00.362 [2024-12-05 14:10:06.488549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.362 [2024-12-05 14:10:06.539372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.931 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.932 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:00.932 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Qk3 00:20:01.193 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:01.453 [2024-12-05 14:10:07.515847] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:01.453 TLSTESTn1 00:20:01.453 14:10:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:01.453 Running I/O for 10 seconds... 
00:20:03.476 4094.00 IOPS, 15.99 MiB/s [2024-12-05T13:10:10.720Z] 4277.50 IOPS, 16.71 MiB/s [2024-12-05T13:10:12.102Z] 4502.00 IOPS, 17.59 MiB/s [2024-12-05T13:10:13.041Z] 4905.75 IOPS, 19.16 MiB/s [2024-12-05T13:10:13.981Z] 5122.00 IOPS, 20.01 MiB/s [2024-12-05T13:10:14.920Z] 5031.67 IOPS, 19.65 MiB/s [2024-12-05T13:10:15.858Z] 5010.29 IOPS, 19.57 MiB/s [2024-12-05T13:10:16.800Z] 5129.88 IOPS, 20.04 MiB/s [2024-12-05T13:10:17.741Z] 5220.44 IOPS, 20.39 MiB/s [2024-12-05T13:10:18.002Z] 5098.50 IOPS, 19.92 MiB/s 00:20:11.702 Latency(us) 00:20:11.702 [2024-12-05T13:10:18.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.702 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:11.702 Verification LBA range: start 0x0 length 0x2000 00:20:11.702 TLSTESTn1 : 10.02 5100.90 19.93 0.00 0.00 25054.08 5242.88 54176.43 00:20:11.702 [2024-12-05T13:10:18.002Z] =================================================================================================================== 00:20:11.702 [2024-12-05T13:10:18.002Z] Total : 5100.90 19.93 0.00 0.00 25054.08 5242.88 54176.43 00:20:11.702 { 00:20:11.702 "results": [ 00:20:11.702 { 00:20:11.702 "job": "TLSTESTn1", 00:20:11.702 "core_mask": "0x4", 00:20:11.702 "workload": "verify", 00:20:11.702 "status": "finished", 00:20:11.702 "verify_range": { 00:20:11.702 "start": 0, 00:20:11.702 "length": 8192 00:20:11.702 }, 00:20:11.702 "queue_depth": 128, 00:20:11.702 "io_size": 4096, 00:20:11.702 "runtime": 10.020388, 00:20:11.702 "iops": 5100.900284499961, 00:20:11.702 "mibps": 19.925391736327974, 00:20:11.702 "io_failed": 0, 00:20:11.702 "io_timeout": 0, 00:20:11.702 "avg_latency_us": 25054.077424790823, 00:20:11.702 "min_latency_us": 5242.88, 00:20:11.702 "max_latency_us": 54176.426666666666 00:20:11.702 } 00:20:11.702 ], 00:20:11.702 "core_count": 1 00:20:11.702 } 00:20:11.702 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:11.702 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:11.702 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:20:11.702 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:20:11.702 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:11.702 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:11.702 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:11.702 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:11.702 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:11.702 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:11.702 nvmf_trace.0 00:20:11.702 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:20:11.702 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2756528 00:20:11.702 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2756528 ']' 00:20:11.702 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 2756528 00:20:11.702 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:11.702 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:11.702 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2756528 00:20:11.702 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:11.702 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:11.702 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2756528' 00:20:11.702 killing process with pid 2756528 00:20:11.702 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2756528 00:20:11.702 Received shutdown signal, test time was about 10.000000 seconds 00:20:11.702 00:20:11.702 Latency(us) 00:20:11.702 [2024-12-05T13:10:18.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.702 [2024-12-05T13:10:18.002Z] =================================================================================================================== 00:20:11.702 [2024-12-05T13:10:18.002Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:11.702 14:10:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2756528 00:20:11.962 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:11.963 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:11.963 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:11.963 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:11.963 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:11.963 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:11.963 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:11.963 rmmod nvme_tcp 00:20:11.963 rmmod nvme_fabrics 00:20:11.963 rmmod nvme_keyring 00:20:11.963 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:11.963 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:11.963 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:11.963 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2756174 ']' 00:20:11.963 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2756174 00:20:11.963 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2756174 ']' 00:20:11.963 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2756174 00:20:11.963 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:11.963 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:11.963 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2756174 00:20:11.963 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:11.963 14:10:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:11.963 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2756174' 00:20:11.963 killing process with pid 2756174 00:20:11.963 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2756174 00:20:11.963 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2756174 00:20:12.224 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:12.224 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:12.224 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:12.224 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:12.224 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:12.224 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:12.224 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:12.225 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:12.225 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:12.225 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.225 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.225 14:10:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.134 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:14.134 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Qk3 00:20:14.134 00:20:14.134 real 0m23.179s 00:20:14.134 user 0m24.721s 00:20:14.134 sys 0m9.752s 00:20:14.134 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:14.134 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:14.134 ************************************ 00:20:14.134 END TEST nvmf_fips 00:20:14.134 ************************************ 00:20:14.134 14:10:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:14.134 14:10:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:14.135 14:10:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:14.135 14:10:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:14.395 ************************************ 00:20:14.395 START TEST nvmf_control_msg_list 00:20:14.395 ************************************ 00:20:14.395 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:14.395 * Looking for test storage... 
00:20:14.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:14.395 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:14.395 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:20:14.395 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:14.395 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:14.395 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:14.395 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:14.395 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:14.395 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:14.395 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:14.395 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:14.395 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:14.395 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:14.395 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:14.395 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:14.395 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:14.395 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:14.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.396 --rc genhtml_branch_coverage=1 00:20:14.396 --rc genhtml_function_coverage=1 00:20:14.396 --rc genhtml_legend=1 00:20:14.396 --rc geninfo_all_blocks=1 00:20:14.396 --rc geninfo_unexecuted_blocks=1 00:20:14.396 00:20:14.396 ' 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:14.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.396 --rc genhtml_branch_coverage=1 00:20:14.396 --rc genhtml_function_coverage=1 00:20:14.396 --rc genhtml_legend=1 00:20:14.396 --rc geninfo_all_blocks=1 00:20:14.396 --rc geninfo_unexecuted_blocks=1 00:20:14.396 00:20:14.396 ' 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:14.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.396 --rc genhtml_branch_coverage=1 00:20:14.396 --rc genhtml_function_coverage=1 00:20:14.396 --rc genhtml_legend=1 00:20:14.396 --rc geninfo_all_blocks=1 00:20:14.396 --rc geninfo_unexecuted_blocks=1 00:20:14.396 00:20:14.396 ' 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:14.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.396 --rc genhtml_branch_coverage=1 00:20:14.396 --rc genhtml_function_coverage=1 00:20:14.396 --rc genhtml_legend=1 00:20:14.396 --rc geninfo_all_blocks=1 00:20:14.396 --rc geninfo_unexecuted_blocks=1 00:20:14.396 00:20:14.396 ' 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:14.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:14.396 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.656 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:14.656 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:14.656 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:14.656 14:10:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:22.806 14:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:22.806 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.806 14:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:22.806 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:22.806 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:22.807 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:22.807 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:22.807 14:10:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:22.807 14:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:22.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:22.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:20:22.807 00:20:22.807 --- 10.0.0.2 ping statistics --- 00:20:22.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.807 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:22.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:22.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:20:22.807 00:20:22.807 --- 10.0.0.1 ping statistics --- 00:20:22.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.807 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2762902 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2762902 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2762902 ']' 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:22.807 14:10:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:22.807 [2024-12-05 14:10:28.249736] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:20:22.807 [2024-12-05 14:10:28.249803] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.807 [2024-12-05 14:10:28.349870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.807 [2024-12-05 14:10:28.400558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.807 [2024-12-05 14:10:28.400605] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.807 [2024-12-05 14:10:28.400613] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.807 [2024-12-05 14:10:28.400621] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.807 [2024-12-05 14:10:28.400627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:22.807 [2024-12-05 14:10:28.401407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.807 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:22.807 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:22.807 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:22.807 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:22.807 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:23.069 [2024-12-05 14:10:29.132803] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:23.069 Malloc0 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.069 14:10:29 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:23.069 [2024-12-05 14:10:29.187288] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2763209 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2763211 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2763213 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2763209 00:20:23.069 14:10:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:23.069 [2024-12-05 14:10:29.288452] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:23.069 [2024-12-05 14:10:29.288804] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:23.069 [2024-12-05 14:10:29.289193] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:24.453 Initializing NVMe Controllers 00:20:24.453 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:24.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:24.453 Initialization complete. Launching workers. 
00:20:24.453 ======================================================== 00:20:24.453 Latency(us) 00:20:24.453 Device Information : IOPS MiB/s Average min max 00:20:24.453 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1697.00 6.63 589.16 151.61 855.06 00:20:24.453 ======================================================== 00:20:24.453 Total : 1697.00 6.63 589.16 151.61 855.06 00:20:24.453 00:20:24.453 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2763211 00:20:24.453 Initializing NVMe Controllers 00:20:24.453 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:24.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:24.453 Initialization complete. Launching workers. 00:20:24.453 ======================================================== 00:20:24.453 Latency(us) 00:20:24.454 Device Information : IOPS MiB/s Average min max 00:20:24.454 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40923.03 40814.71 41431.52 00:20:24.454 ======================================================== 00:20:24.454 Total : 25.00 0.10 40923.03 40814.71 41431.52 00:20:24.454 00:20:24.454 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2763213 00:20:24.454 Initializing NVMe Controllers 00:20:24.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:24.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:24.454 Initialization complete. Launching workers. 00:20:24.454 ======================================================== 00:20:24.454 Latency(us) 00:20:24.454 Device Information : IOPS MiB/s Average min max 00:20:24.454 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40907.40 40812.41 41007.34 00:20:24.454 ======================================================== 00:20:24.454 Total : 25.00 0.10 40907.40 40812.41 41007.34 00:20:24.454 00:20:24.454 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:24.454 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:24.454 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:24.454 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:24.454 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:24.454 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:24.454 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:24.454 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:24.454 rmmod nvme_tcp 00:20:24.454 rmmod nvme_fabrics 00:20:24.454 rmmod nvme_keyring 00:20:24.454 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:24.454 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:24.454 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:24.454 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 
-- # '[' -n 2762902 ']' 00:20:24.454 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2762902 00:20:24.454 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2762902 ']' 00:20:24.454 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2762902 00:20:24.454 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:24.454 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:24.454 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2762902 00:20:24.454 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:24.454 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:24.454 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2762902' 00:20:24.454 killing process with pid 2762902 00:20:24.454 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2762902 00:20:24.454 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2762902 00:20:24.714 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:24.714 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:24.714 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:24.714 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:24.714 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:24.714 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:24.714 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:24.714 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:24.714 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:24.715 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.715 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:24.715 14:10:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.257 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:27.257 00:20:27.257 real 0m12.495s 00:20:27.257 user 0m8.229s 00:20:27.258 sys 0m6.484s 00:20:27.258 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:27.258 14:10:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:27.258 ************************************ 00:20:27.258 END TEST nvmf_control_msg_list 00:20:27.258 
************************************ 00:20:27.258 14:10:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:27.258 14:10:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:27.258 14:10:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:27.258 14:10:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:27.258 ************************************ 00:20:27.258 START TEST nvmf_wait_for_buf 00:20:27.258 ************************************ 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:27.258 * Looking for test storage... 00:20:27.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:27.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.258 --rc genhtml_branch_coverage=1 00:20:27.258 --rc genhtml_function_coverage=1 00:20:27.258 --rc genhtml_legend=1 00:20:27.258 --rc geninfo_all_blocks=1 00:20:27.258 --rc geninfo_unexecuted_blocks=1 00:20:27.258 00:20:27.258 ' 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:27.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.258 --rc genhtml_branch_coverage=1 00:20:27.258 --rc genhtml_function_coverage=1 00:20:27.258 --rc genhtml_legend=1 00:20:27.258 --rc geninfo_all_blocks=1 00:20:27.258 --rc geninfo_unexecuted_blocks=1 00:20:27.258 00:20:27.258 ' 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:27.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.258 --rc genhtml_branch_coverage=1 00:20:27.258 --rc genhtml_function_coverage=1 00:20:27.258 --rc genhtml_legend=1 00:20:27.258 --rc geninfo_all_blocks=1 00:20:27.258 --rc geninfo_unexecuted_blocks=1 00:20:27.258 00:20:27.258 ' 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:27.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.258 --rc genhtml_branch_coverage=1 00:20:27.258 --rc genhtml_function_coverage=1 00:20:27.258 --rc genhtml_legend=1 00:20:27.258 --rc geninfo_all_blocks=1 00:20:27.258 --rc geninfo_unexecuted_blocks=1 00:20:27.258 00:20:27.258 ' 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:27.258 14:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.258 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:27.259 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.259 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:27.259 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:27.259 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:27.259 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.259 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.259 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.259 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:27.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:27.259 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:27.259 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:27.259 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:27.259 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:27.259 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:20:27.259 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.259 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:27.259 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:27.259 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:27.259 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.259 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.259 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.259 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:27.259 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:27.259 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:27.259 14:10:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:35.401 
14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:35.401 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:35.401 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:35.401 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:35.402 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:35.402 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:35.402 14:10:40 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:35.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:35.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:20:35.402 00:20:35.402 --- 10.0.0.2 ping statistics --- 00:20:35.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.402 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:35.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:35.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:20:35.402 00:20:35.402 --- 10.0.0.1 ping statistics --- 00:20:35.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.402 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2767605 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2767605 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2767605 ']' 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.402 14:10:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:35.402 [2024-12-05 14:10:40.868462] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
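[Annotation] For reference, the nvmf_tcp_init sequence traced above reduces to the standalone sketch below. This is a condensed illustration rather than the test harness itself; the namespace name, interface names (cvl_0_0/cvl_0_1), and addresses are taken from this log, and the commands assume root privileges:

  # Put the target-side E810 port in its own network namespace so the SPDK
  # target and the kernel initiator talk over real interfaces on one host.
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"            # target port joins the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side, default namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # Allow NVMe/TCP traffic on port 4420 through the initiator-side interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                         # initiator -> target check
  ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator check

The two ping runs recorded above (0.608 ms and 0.331 ms round trips) are exactly this final connectivity check.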
00:20:35.402 [2024-12-05 14:10:40.868532] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.402 [2024-12-05 14:10:40.970525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.402 [2024-12-05 14:10:41.021087] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.402 [2024-12-05 14:10:41.021141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.402 [2024-12-05 14:10:41.021149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.402 [2024-12-05 14:10:41.021157] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.402 [2024-12-05 14:10:41.021164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:35.402 [2024-12-05 14:10:41.021985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.402 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.402 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:35.402 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:35.402 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:35.402 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.663 14:10:41 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:35.663 Malloc0 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:35.663 [2024-12-05 14:10:41.855960] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:35.663 [2024-12-05 14:10:41.892302] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.663 14:10:41 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:35.923 [2024-12-05 14:10:41.997578] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:20:37.305 Initializing NVMe Controllers
00:20:37.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:20:37.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:20:37.305 Initialization complete. Launching workers.
00:20:37.305 ========================================================
00:20:37.305                                                                              Latency(us)
00:20:37.305 Device Information                                          :      IOPS     MiB/s   Average       min       max
00:20:37.305 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0:    127.79     15.97  32422.43   7004.60  63860.14
00:20:37.305 ========================================================
00:20:37.305 Total                                                       :    127.79     15.97  32422.43   7004.60  63860.14
00:20:37.305
00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2022
00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2022 -eq 0 ]]
00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini
00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync
00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e
00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:37.305 rmmod nvme_tcp
00:20:37.305 rmmod nvme_fabrics
00:20:37.305 rmmod nvme_keyring
00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e
00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0
00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2767605 ']'
00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2767605
00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2767605 ']'
00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2767605
00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf --
common/autotest_common.sh@959 -- # uname 00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.305 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2767605 00:20:37.566 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:37.566 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:37.566 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2767605' 00:20:37.566 killing process with pid 2767605 00:20:37.566 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2767605 00:20:37.566 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2767605 00:20:37.566 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:37.566 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:37.566 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:37.566 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:37.566 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:37.566 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:37.566 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:37.566 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:37.566 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:37.566 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.566 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.566 14:10:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.111 14:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:40.111 00:20:40.111 real 0m12.827s 00:20:40.111 user 0m5.173s 00:20:40.111 sys 0m6.244s 00:20:40.111 14:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:40.111 14:10:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:40.111 ************************************ 00:20:40.111 END TEST nvmf_wait_for_buf 00:20:40.111 ************************************ 00:20:40.111 14:10:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:40.111 14:10:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:40.111 14:10:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:40.111 14:10:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:40.111 14:10:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:40.111 14:10:45 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:46.702 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:46.702 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:46.702 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:46.702 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:46.702 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.963 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:46.963 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:46.963 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:46.963 14:10:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:46.963 14:10:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:46.963 14:10:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:46.963 14:10:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:46.963 ************************************ 00:20:46.963 START TEST nvmf_perf_adq 00:20:46.963 ************************************ 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:46.963 * Looking for test storage... 00:20:46.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:46.963 14:10:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:46.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.963 --rc genhtml_branch_coverage=1 00:20:46.963 --rc genhtml_function_coverage=1 00:20:46.963 --rc genhtml_legend=1 00:20:46.963 --rc geninfo_all_blocks=1 00:20:46.963 --rc geninfo_unexecuted_blocks=1 00:20:46.963 00:20:46.963 ' 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:46.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.963 --rc genhtml_branch_coverage=1 00:20:46.963 --rc genhtml_function_coverage=1 00:20:46.963 --rc genhtml_legend=1 00:20:46.963 --rc geninfo_all_blocks=1 00:20:46.963 --rc geninfo_unexecuted_blocks=1 00:20:46.963 00:20:46.963 ' 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:46.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.963 --rc genhtml_branch_coverage=1 00:20:46.963 --rc genhtml_function_coverage=1 00:20:46.963 --rc genhtml_legend=1 00:20:46.963 --rc geninfo_all_blocks=1 00:20:46.963 --rc geninfo_unexecuted_blocks=1 00:20:46.963 00:20:46.963 ' 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:46.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.963 --rc genhtml_branch_coverage=1 00:20:46.963 --rc genhtml_function_coverage=1 00:20:46.963 --rc genhtml_legend=1 00:20:46.963 --rc geninfo_all_blocks=1 00:20:46.963 --rc geninfo_unexecuted_blocks=1 00:20:46.963 00:20:46.963 ' 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
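[Annotation] The lcov version probe traced above ('lt 1.15 2' via cmp_versions, here and in the earlier wait_for_buf run) is an element-wise numeric comparison of dotted version strings. A minimal hypothetical condensation follows; the function name and simplifications are mine, and the real scripts/common.sh also splits on '-' and ':' and handles other operators:

  # Return success (0) when $1 sorts strictly before $2, field by field.
  version_lt() {
    local IFS=.                                   # split dotted versions into fields
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field smaller: less
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # earlier field larger: not less
    done
    return 1                                      # all fields equal: not strictly less
  }
  version_lt 1.15 2 && echo "old lcov: pass legacy --rc branch/function options"

With lcov 1.15 the first field already decides (1 < 2), which is why the trace above takes the legacy-options branch and sets lcov_rc_opt.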
00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.963 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.964 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.964 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.964 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.964 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.964 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.964 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.964 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.964 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:47.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:47.226 14:10:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:47.226 14:10:53 
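The "[: : integer expression expected" message above is a real shell error captured by the log, not trace noise: line 33 of nvmf/common.sh feeds an empty variable to a numeric -eq test, '[' exits with status 2, and because the test only guards a condition the script takes the false branch and carries on. A short sketch of the failure mode plus two defensive spellings; SOME_FLAG is a hypothetical stand-in for whichever variable was empty:

    # Reproduces the error: an empty operand is not an integer.
    [ '' -eq 1 ]    # prints "[: : integer expression expected", exits 2

    # Defensive variants that treat empty/unset as 0 instead of erroring:
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo "flag set"
    (( ${SOME_FLAG:-0} == 1 )) && echo "flag set"   # arithmetic context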
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:55.362 14:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:55.362 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:55.362 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:55.362 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:55.362 14:11:00 
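Device discovery here keys off PCI vendor:device IDs: 0x8086:0x159b is an Intel E810 function driven by ice, and each match is resolved to its kernel interface through the /sys/bus/pci/devices/$pci/net/* glob, which is how cvl_0_0 and cvl_0_1 turn up. A minimal sketch of that sysfs walk (the harness itself works from a prebuilt pci_bus_cache, so this is an approximation, not its implementation):

    # Classify E810 ports and list their net devices.
    for pci in /sys/bus/pci/devices/*; do
        ven=$(<"$pci/vendor") dev=$(<"$pci/device")
        [[ $ven == 0x8086 && $dev == 0x159b ]] || continue   # Intel E810
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found ${pci##*/}: ${net##*/}"
        done
    done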
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:55.362 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:55.362 14:11:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:55.934 14:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:57.852 14:11:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
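adq_reload_driver, which runs here and again before the second pass below, is a clean-slate cycle of the NIC driver: make sure the mqprio scheduler module is available, unload and reload ice so any leftover channel/TC configuration is dropped, then give the links time to renegotiate. Condensed from the trace (the "|| true" is a defensive addition, not in the original):

    modprobe -a sch_mqprio   # mqprio qdisc is needed for ADQ channels later
    rmmod ice || true        # ignore failure if the module was not loaded
    modprobe ice
    sleep 5                  # E810 links take a few seconds to come back up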
gather_supported_nvmf_pci_devs 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:03.139 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:03.139 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:03.139 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:03.139 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:03.139 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:03.140 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:03.140 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:03.140 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:03.140 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:03.140 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:03.140 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:03.140 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:03.140 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:03.140 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:03.140 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:03.140 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:03.140 14:11:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:03.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:03.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:21:03.140 00:21:03.140 --- 10.0.0.2 ping statistics --- 00:21:03.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.140 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:03.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:03.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:21:03.140 00:21:03.140 --- 10.0.0.1 ping statistics --- 00:21:03.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.140 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2777835 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2777835 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2777835 ']' 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:03.140 14:11:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:03.140 [2024-12-05 14:11:09.256929] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
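The nvmf_tcp_init sequence above builds the physical loopback topology the test depends on: one E810 port (cvl_0_0, 10.0.0.2) moves into the cvl_0_0_ns_spdk namespace to host the target, while its sibling port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, so NVMe/TCP traffic actually traverses the NIC rather than the kernel loopback (the two ports are evidently cabled back to back on this rig). Condensed from the commands in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                            # initiator-to-target sanity check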
00:21:03.140 [2024-12-05 14:11:09.256994] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.140 [2024-12-05 14:11:09.355708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:03.140 [2024-12-05 14:11:09.409840] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.140 [2024-12-05 14:11:09.409889] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.140 [2024-12-05 14:11:09.409898] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.140 [2024-12-05 14:11:09.409905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.140 [2024-12-05 14:11:09.409911] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:03.140 [2024-12-05 14:11:09.412321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.140 [2024-12-05 14:11:09.412500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.140 [2024-12-05 14:11:09.412599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:03.140 [2024-12-05 14:11:09.412600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.080 
14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.080 [2024-12-05 14:11:10.278888] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.080 Malloc1 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.080 [2024-12-05 14:11:10.359686] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2777987 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:04.080 14:11:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
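rpc_cmd in the trace is the harness wrapper around SPDK's scripts/rpc.py, so the adq_configure_nvmf_target sequence above can be replayed as plain RPC calls against a --wait-for-rpc target. The ordering matters and the trace respects it: socket options must be set before framework_start_init or they never take effect, and the 0 passed to --enable-placement-id is the function's parameter, meaning this baseline pass runs with placement IDs disabled. A sketch, with $rpc a hypothetical path to the script:

    rpc=/path/to/spdk/scripts/rpc.py   # hypothetical checkout location
    $rpc sock_impl_set_options -i posix --enable-placement-id 0 \
         --enable-zerocopy-send-server
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4420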
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:06.625 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:06.625 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.625 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.625 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.625 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:06.625 "tick_rate": 2400000000, 00:21:06.625 "poll_groups": [ 00:21:06.625 { 00:21:06.625 "name": "nvmf_tgt_poll_group_000", 00:21:06.625 "admin_qpairs": 1, 00:21:06.625 "io_qpairs": 1, 00:21:06.625 "current_admin_qpairs": 1, 00:21:06.625 "current_io_qpairs": 1, 00:21:06.625 "pending_bdev_io": 0, 00:21:06.625 "completed_nvme_io": 16026, 00:21:06.625 "transports": [ 00:21:06.625 { 00:21:06.625 "trtype": "TCP" 00:21:06.625 } 00:21:06.625 ] 00:21:06.625 }, 00:21:06.625 { 00:21:06.625 "name": "nvmf_tgt_poll_group_001", 00:21:06.625 "admin_qpairs": 0, 00:21:06.625 "io_qpairs": 1, 00:21:06.625 "current_admin_qpairs": 0, 00:21:06.626 "current_io_qpairs": 1, 00:21:06.626 "pending_bdev_io": 0, 00:21:06.626 "completed_nvme_io": 16063, 00:21:06.626 "transports": [ 00:21:06.626 { 00:21:06.626 "trtype": "TCP" 00:21:06.626 } 00:21:06.626 ] 00:21:06.626 }, 00:21:06.626 { 00:21:06.626 "name": "nvmf_tgt_poll_group_002", 00:21:06.626 "admin_qpairs": 0, 00:21:06.626 "io_qpairs": 1, 00:21:06.626 "current_admin_qpairs": 0, 00:21:06.626 "current_io_qpairs": 1, 00:21:06.626 "pending_bdev_io": 0, 00:21:06.626 "completed_nvme_io": 16268, 00:21:06.626 "transports": [ 00:21:06.626 { 00:21:06.626 "trtype": "TCP" 00:21:06.626 } 00:21:06.626 ] 00:21:06.626 }, 00:21:06.626 { 00:21:06.626 "name": "nvmf_tgt_poll_group_003", 00:21:06.626 "admin_qpairs": 0, 00:21:06.626 "io_qpairs": 1, 00:21:06.626 "current_admin_qpairs": 0, 00:21:06.626 "current_io_qpairs": 1, 00:21:06.626 "pending_bdev_io": 0, 00:21:06.626 "completed_nvme_io": 15996, 00:21:06.626 "transports": [ 00:21:06.626 { 00:21:06.626 "trtype": "TCP" 00:21:06.626 } 00:21:06.626 ] 00:21:06.626 } 00:21:06.626 ] 00:21:06.626 }' 00:21:06.626 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:06.626 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:06.626 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:06.626 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:06.626 14:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2777987 00:21:14.768 Initializing NVMe Controllers 00:21:14.768 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:14.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:14.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:14.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:14.768 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:21:14.768 Initialization complete. Launching workers. 00:21:14.768 ======================================================== 00:21:14.768 Latency(us) 00:21:14.768 Device Information : IOPS MiB/s Average min max 00:21:14.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12330.46 48.17 5191.31 1620.44 13667.55 00:21:14.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 12828.16 50.11 4988.52 1211.69 12963.89 00:21:14.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13288.76 51.91 4816.82 1114.01 12606.39 00:21:14.768 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12649.56 49.41 5059.19 1303.58 12451.89 00:21:14.768 ======================================================== 00:21:14.768 Total : 51096.94 199.60 5010.30 1114.01 13667.55 00:21:14.768 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:14.768 rmmod nvme_tcp 00:21:14.768 rmmod nvme_fabrics 00:21:14.768 rmmod nvme_keyring 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2777835 ']' 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2777835 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2777835 ']' 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2777835 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2777835 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2777835' 00:21:14.768 killing process with pid 2777835 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2777835 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2777835 00:21:14.768 14:11:20 
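The pass criterion for this baseline run is the nvmf_get_stats check above: spdk_nvme_perf opened four connections (core mask 0xF0, queue depth 64, 4 KiB random reads), and the stats JSON must show each of the target's four poll groups owning exactly one I/O qpair, meaning the connections landed one per core rather than piling onto a subset. Condensed, reusing the hypothetical $rpc from the earlier sketch:

    # Count poll groups that own exactly one I/O qpair; anything other
    # than 4 means the qpairs were not balanced across the cores.
    count=$($rpc nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
            | wc -l)
    (( count == 4 )) || { echo "qpairs not balanced across poll groups"; exit 1; }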
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.768 14:11:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.776 14:11:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:16.776 14:11:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:16.776 14:11:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:16.776 14:11:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:18.184 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:20.094 14:11:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
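The teardown above relies on a tidy firewall trick: every rule the harness inserts is tagged with an iptables comment (see the ipts call after the namespace setup), so cleanup can strip exactly its own rules by round-tripping the ruleset through grep, with no bookkeeping of rule numbers. Reconstructed from the observed behavior; the harness's actual helper definitions may differ in detail:

    # Insert rules through a wrapper that tags them...
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    # ...then drop only the tagged rules at teardown.
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }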
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:25.383 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:25.383 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:25.383 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.383 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:25.383 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:25.384 14:11:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:25.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:25.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:21:25.384 00:21:25.384 --- 10.0.0.2 ping statistics --- 00:21:25.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.384 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:25.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:25.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:21:25.384 00:21:25.384 --- 10.0.0.1 ping statistics --- 00:21:25.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.384 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:25.384 net.core.busy_poll = 1 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:25.384 net.core.busy_read = 1 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:25.384 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:25.645 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:25.645 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:25.645 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:25.645 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.645 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2782469 00:21:25.645 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2782469 00:21:25.645 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:25.645 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2782469 ']' 00:21:25.645 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.645 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:25.646 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.646 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:25.646 14:11:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.646 [2024-12-05 14:11:31.759408] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:21:25.646 [2024-12-05 14:11:31.759488] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.646 [2024-12-05 14:11:31.860301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:25.646 [2024-12-05 14:11:31.913504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
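The driver-side ADQ setup traced above (perf_adq.sh@22 through @38) is the heart of this test: hardware TC offload goes on, busy polling goes on, an mqprio root qdisc carves the port into two traffic classes, and a flower filter pins NVMe/TCP traffic for 10.0.0.2:4420 into the second class in hardware. Collected into one runnable sketch, with the namespace and interface names taken from the trace (set_xps_rxqs is an SPDK helper script whose internals this log does not show):

NS="ip netns exec cvl_0_0_ns_spdk"

# Enable hardware traffic-class offload; the private flag below is one
# the test turns off for ADQ runs on ice/e810.
$NS ethtool --offload cvl_0_0 hw-tc-offload on
$NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off

# Busy polling keeps the socket layer spinning instead of sleeping.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes: queues 0-1 stay default (TC0), queues 2-3 form TC1.
$NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev cvl_0_0 ingress

# Classify NVMe/TCP (dst 10.0.0.2:4420) into TC1 in hardware (skip_sw).
$NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

# Align XPS/RX queue affinity with the new channels (SPDK helper script).
$NS scripts/perf/nvmf/set_xps_rxqs cvl_0_0

Every command above is lifted from the trace; only the $NS shorthand is added for readability.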
00:21:25.646 [2024-12-05 14:11:31.913556] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.646 [2024-12-05 14:11:31.913565] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.646 [2024-12-05 14:11:31.913572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.646 [2024-12-05 14:11:31.913578] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:25.646 [2024-12-05 14:11:31.915671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.646 [2024-12-05 14:11:31.915832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:25.646 [2024-12-05 14:11:31.915995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.646 [2024-12-05 14:11:31.915995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.586 14:11:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.586 [2024-12-05 14:11:32.781562] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.586 Malloc1 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.586 [2024-12-05 14:11:32.855314] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2782695 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:26.586 14:11:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:29.128 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:29.128 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.128 14:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:29.128 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.128 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:29.128 "tick_rate": 2400000000, 00:21:29.128 "poll_groups": [ 00:21:29.128 { 00:21:29.128 "name": "nvmf_tgt_poll_group_000", 00:21:29.128 "admin_qpairs": 1, 00:21:29.128 "io_qpairs": 3, 00:21:29.128 "current_admin_qpairs": 1, 00:21:29.128 "current_io_qpairs": 3, 00:21:29.128 "pending_bdev_io": 0, 00:21:29.128 "completed_nvme_io": 28945, 00:21:29.128 "transports": [ 00:21:29.128 { 00:21:29.128 "trtype": "TCP" 00:21:29.128 } 00:21:29.128 ] 00:21:29.128 }, 00:21:29.128 { 00:21:29.128 "name": "nvmf_tgt_poll_group_001", 00:21:29.128 "admin_qpairs": 0, 00:21:29.128 "io_qpairs": 1, 00:21:29.128 "current_admin_qpairs": 0, 00:21:29.128 "current_io_qpairs": 1, 00:21:29.128 "pending_bdev_io": 0, 00:21:29.128 "completed_nvme_io": 25541, 00:21:29.128 "transports": [ 00:21:29.128 { 00:21:29.128 "trtype": "TCP" 00:21:29.128 } 00:21:29.128 ] 00:21:29.128 }, 00:21:29.128 { 00:21:29.128 "name": "nvmf_tgt_poll_group_002", 00:21:29.128 "admin_qpairs": 0, 00:21:29.128 "io_qpairs": 0, 00:21:29.128 "current_admin_qpairs": 0, 00:21:29.128 "current_io_qpairs": 0, 00:21:29.128 "pending_bdev_io": 0, 00:21:29.128 "completed_nvme_io": 0, 00:21:29.128 "transports": [ 00:21:29.128 { 00:21:29.128 "trtype": "TCP" 00:21:29.128 } 00:21:29.128 ] 00:21:29.128 }, 00:21:29.128 { 00:21:29.128 "name": "nvmf_tgt_poll_group_003", 00:21:29.128 "admin_qpairs": 0, 00:21:29.128 "io_qpairs": 0, 00:21:29.128 "current_admin_qpairs": 0, 00:21:29.128 "current_io_qpairs": 0, 00:21:29.128 "pending_bdev_io": 0, 00:21:29.128 "completed_nvme_io": 0, 00:21:29.128 "transports": [ 00:21:29.128 { 00:21:29.128 "trtype": "TCP" 00:21:29.128 } 00:21:29.128 ] 00:21:29.128 } 00:21:29.128 ] 00:21:29.128 }' 00:21:29.128 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:29.128 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:29.128 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:29.128 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:29.128 14:11:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2782695 00:21:37.261 Initializing NVMe Controllers 00:21:37.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:37.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:37.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:37.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:37.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:37.261 Initialization complete. Launching workers. 
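While spdk_nvme_perf drives I/O from lcores 4 through 7, the nvmf_get_stats check traced at perf_adq.sh@107 through @109 is the actual ADQ assertion: with the flower filter steering all NVMe/TCP connections onto a subset of poll groups, the remaining groups must stay idle. Reduced to a standalone sketch (rpc.py standing in for the test's rpc_cmd wrapper):

# jq prints one line per poll group that has no active I/O qpairs
# (the 'length' of each matching object), so wc -l yields the count.
count=$(scripts/rpc.py nvmf_get_stats \
    | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
    | wc -l)

# In the stats above, nvmf_tgt_poll_group_002 and _003 are idle, so
# count=2 and the [[ 2 -lt 2 ]] failure branch is skipped.
if [[ $count -lt 2 ]]; then
    echo "ADQ steering failed: only $count idle poll groups" >&2
    exit 1
fi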
00:21:37.261 ======================================================== 00:21:37.261 Latency(us) 00:21:37.261 Device Information : IOPS MiB/s Average min max 00:21:37.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6627.60 25.89 9668.36 1312.38 60218.20 00:21:37.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 17562.60 68.60 3654.92 902.41 46128.09 00:21:37.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6751.70 26.37 9488.88 1156.01 58006.63 00:21:37.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6836.20 26.70 9372.71 1173.58 55797.58 00:21:37.261 ======================================================== 00:21:37.261 Total : 37778.10 147.57 6787.21 902.41 60218.20 00:21:37.261 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:37.261 rmmod nvme_tcp 00:21:37.261 rmmod nvme_fabrics 00:21:37.261 rmmod nvme_keyring 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2782469 ']' 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2782469 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2782469 ']' 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2782469 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2782469 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2782469' 00:21:37.261 killing process with pid 2782469 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2782469 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2782469 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:37.261 
14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:37.261 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:40.558 00:21:40.558 real 0m53.352s 00:21:40.558 user 2m50.083s 00:21:40.558 sys 0m11.235s 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:40.558 ************************************ 00:21:40.558 END TEST nvmf_perf_adq 00:21:40.558 ************************************ 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:40.558 ************************************ 00:21:40.558 START TEST nvmf_shutdown 00:21:40.558 ************************************ 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:40.558 * Looking for test storage... 
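The firewall teardown traced a few lines above (nvmf/common.sh@297, expanded at @791) never deletes rules individually: every rule was inserted through the ipts wrapper, which tags it with an iptables comment, so iptr can drop all tagged rules in one pass by filtering the saved ruleset. Both wrapper bodies are visible in the trace; condensed:

# Insert a rule and tag it so it can be recognized later.
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

# Remove every tagged rule in one shot, leaving all other rules intact.
iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # as at @287 earlier
# ... test runs ...
iptr                                                       # as at @297 above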
00:21:40.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:40.558 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:40.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.558 --rc genhtml_branch_coverage=1 00:21:40.558 --rc genhtml_function_coverage=1 00:21:40.558 --rc genhtml_legend=1 00:21:40.558 --rc geninfo_all_blocks=1 00:21:40.559 --rc geninfo_unexecuted_blocks=1 00:21:40.559 00:21:40.559 ' 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:40.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.559 --rc genhtml_branch_coverage=1 00:21:40.559 --rc genhtml_function_coverage=1 00:21:40.559 --rc genhtml_legend=1 00:21:40.559 --rc geninfo_all_blocks=1 00:21:40.559 --rc geninfo_unexecuted_blocks=1 00:21:40.559 00:21:40.559 ' 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:40.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.559 --rc genhtml_branch_coverage=1 00:21:40.559 --rc genhtml_function_coverage=1 00:21:40.559 --rc genhtml_legend=1 00:21:40.559 --rc geninfo_all_blocks=1 00:21:40.559 --rc geninfo_unexecuted_blocks=1 00:21:40.559 00:21:40.559 ' 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:40.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.559 --rc genhtml_branch_coverage=1 00:21:40.559 --rc genhtml_function_coverage=1 00:21:40.559 --rc genhtml_legend=1 00:21:40.559 --rc geninfo_all_blocks=1 00:21:40.559 --rc geninfo_unexecuted_blocks=1 00:21:40.559 00:21:40.559 ' 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
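The lt 1.15 2 probe above walks scripts/common.sh's cmp_versions: each version string is split on the characters .-: and compared element-wise until one side differs, so 1.15 sorts before 2 because the first components already decide it. A self-contained sketch of the same comparison for plain numeric versions (the real helper also answers the >, >= and <= variants):

# Exit 0 when version $1 sorts strictly before version $2.
version_lt() {
    local -a v1 v2
    IFS='.-:' read -r -a v1 <<< "$1"
    IFS='.-:' read -r -a v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # decided: older
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # decided: newer
    done
    return 1   # versions are equal
}

version_lt 1.15 2 && echo older   # prints "older", matching the trace above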
00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:40.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:40.559 14:11:46 
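The stray "[: : integer expression expected" above deserves a note: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', a numeric test against a variable that is empty in this environment, and test cannot parse the empty string as an integer. The run survives because the failed test merely returns false, but the usual guard is a default expansion. The variable name below is illustrative, since the trace does not show which one is empty:

flag=""                              # empty in this CI environment
[ "$flag" -eq 1 ] && echo yes        # prints "[: : integer expression expected"
[ "${flag:-0}" -eq 1 ] && echo yes   # quiet: empty string defaults to 0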
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:40.559 ************************************ 00:21:40.559 START TEST nvmf_shutdown_tc1 00:21:40.559 ************************************ 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:40.559 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:40.560 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.560 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.560 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.560 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:40.560 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:40.560 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:40.560 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:48.700 14:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:48.700 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:48.701 14:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:48.701 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:48.701 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:48.701 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:48.701 14:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:48.701 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:48.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:48.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:21:48.701 00:21:48.701 --- 10.0.0.2 ping statistics --- 00:21:48.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.701 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:48.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:48.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:21:48.701 00:21:48.701 --- 10.0.0.1 ping statistics --- 00:21:48.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.701 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2789156 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2789156 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2789156 ']' 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
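nvmfappstart -m 0x1E above reduces to three traced steps: nvmf/common.sh@293 prepends the netns wrapper onto NVMF_APP (which is why the command line at @508 carries the ip netns exec prefix), @509 backgrounds nvmf_tgt and records its pid, and waitforlisten polls until the RPC socket answers. A rough equivalent, with the retry policy assumed rather than copied from the helper:

# Start the target inside the test namespace, as at @508/@509 above.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Poll the UNIX-domain RPC socket until the app answers (here up to ~50 s).
for ((i = 0; i < 100; i++)); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.5
done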
00:21:48.701 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:48.702 14:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:48.702 [2024-12-05 14:11:54.434031] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:21:48.702 [2024-12-05 14:11:54.434100] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.702 [2024-12-05 14:11:54.536195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:48.702 [2024-12-05 14:11:54.588569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.702 [2024-12-05 14:11:54.588621] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.702 [2024-12-05 14:11:54.588629] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.702 [2024-12-05 14:11:54.588637] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.702 [2024-12-05 14:11:54.588643] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:48.702 [2024-12-05 14:11:54.590703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.702 [2024-12-05 14:11:54.590865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:48.702 [2024-12-05 14:11:54.590991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:48.702 [2024-12-05 14:11:54.590993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.273 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.273 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:49.273 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:49.273 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:49.273 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:49.273 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.273 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:49.273 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.273 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:49.273 [2024-12-05 14:11:55.318280] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.273 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.273 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:49.273 14:11:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:49.273 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:49.273 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:49.273 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:49.273 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.274 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:49.274 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.274 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:49.274 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.274 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:49.274 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.274 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:49.274 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.274 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:49.274 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.274 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:49.274 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.274 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:49.274 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.274 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:49.274 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.274 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:49.274 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.274 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:49.274 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:49.274 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.274 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:49.274 Malloc1 
00:21:49.274 [2024-12-05 14:11:55.447098] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.274 Malloc2 00:21:49.274 Malloc3 00:21:49.274 Malloc4 00:21:49.534 Malloc5 00:21:49.534 Malloc6 00:21:49.534 Malloc7 00:21:49.534 Malloc8 00:21:49.534 Malloc9 00:21:49.794 Malloc10 00:21:49.794 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.794 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:49.794 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2789534 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2789534 /var/tmp/bdevperf.sock 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2789534 ']' 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:49.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
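What follows is gen_nvmf_target_json being traced as it builds the bdev_svc configuration: one bdev_nvme_attach_controller stanza per subsystem 1..10, joined with commas and pretty-printed through jq. A condensed, runnable sketch of that pattern (printf standing in for the heredoc seen in the trace, and the jq pretty-print step omitted):

    gen_nvmf_target_json() {
        local s stanzas=()
        for s in "${@:-1}"; do
            stanzas+=("$(printf '{"params": {"name": "Nvme%s", "trtype": "%s", "traddr": "%s", "adrfam": "ipv4", "trsvcid": "%s", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": %s, "ddgst": %s}, "method": "bdev_nvme_attach_controller"}' \
                "$s" "$TEST_TRANSPORT" "$NVMF_FIRST_TARGET_IP" "$NVMF_PORT" "$s" "$s" "${hdgst:-false}" "${ddgst:-false}")")
        done
        local IFS=,          # join the stanzas with commas
        printf '%s\n' "${stanzas[*]}"
    }

With TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2 and NVMF_PORT=4420, gen_nvmf_target_json 1 2 ... 10 expands to the Nvme1..Nvme10 attach list printed further down, which the harness feeds to bdev_svc via --json /dev/fd/63.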
00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:49.795 { 00:21:49.795 "params": { 00:21:49.795 "name": "Nvme$subsystem", 00:21:49.795 "trtype": "$TEST_TRANSPORT", 00:21:49.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.795 "adrfam": "ipv4", 00:21:49.795 "trsvcid": "$NVMF_PORT", 00:21:49.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.795 "hdgst": ${hdgst:-false}, 00:21:49.795 "ddgst": ${ddgst:-false} 00:21:49.795 }, 00:21:49.795 "method": "bdev_nvme_attach_controller" 00:21:49.795 } 00:21:49.795 EOF 00:21:49.795 )") 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:49.795 { 00:21:49.795 "params": { 00:21:49.795 "name": "Nvme$subsystem", 00:21:49.795 "trtype": "$TEST_TRANSPORT", 00:21:49.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.795 "adrfam": "ipv4", 00:21:49.795 "trsvcid": "$NVMF_PORT", 00:21:49.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.795 "hdgst": ${hdgst:-false}, 00:21:49.795 "ddgst": ${ddgst:-false} 00:21:49.795 }, 00:21:49.795 "method": "bdev_nvme_attach_controller" 00:21:49.795 } 00:21:49.795 EOF 00:21:49.795 )") 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:49.795 { 00:21:49.795 "params": { 00:21:49.795 "name": "Nvme$subsystem", 00:21:49.795 "trtype": "$TEST_TRANSPORT", 00:21:49.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.795 "adrfam": "ipv4", 00:21:49.795 "trsvcid": "$NVMF_PORT", 00:21:49.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.795 "hdgst": ${hdgst:-false}, 00:21:49.795 "ddgst": ${ddgst:-false} 00:21:49.795 }, 00:21:49.795 "method": "bdev_nvme_attach_controller" 
00:21:49.795 } 00:21:49.795 EOF 00:21:49.795 )") 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:49.795 { 00:21:49.795 "params": { 00:21:49.795 "name": "Nvme$subsystem", 00:21:49.795 "trtype": "$TEST_TRANSPORT", 00:21:49.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.795 "adrfam": "ipv4", 00:21:49.795 "trsvcid": "$NVMF_PORT", 00:21:49.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.795 "hdgst": ${hdgst:-false}, 00:21:49.795 "ddgst": ${ddgst:-false} 00:21:49.795 }, 00:21:49.795 "method": "bdev_nvme_attach_controller" 00:21:49.795 } 00:21:49.795 EOF 00:21:49.795 )") 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:49.795 { 00:21:49.795 "params": { 00:21:49.795 "name": "Nvme$subsystem", 00:21:49.795 "trtype": "$TEST_TRANSPORT", 00:21:49.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.795 "adrfam": "ipv4", 00:21:49.795 "trsvcid": "$NVMF_PORT", 00:21:49.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.795 "hdgst": ${hdgst:-false}, 00:21:49.795 "ddgst": ${ddgst:-false} 00:21:49.795 }, 00:21:49.795 "method": "bdev_nvme_attach_controller" 00:21:49.795 } 00:21:49.795 EOF 00:21:49.795 )") 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:49.795 { 00:21:49.795 "params": { 00:21:49.795 "name": "Nvme$subsystem", 00:21:49.795 "trtype": "$TEST_TRANSPORT", 00:21:49.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.795 "adrfam": "ipv4", 00:21:49.795 "trsvcid": "$NVMF_PORT", 00:21:49.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.795 "hdgst": ${hdgst:-false}, 00:21:49.795 "ddgst": ${ddgst:-false} 00:21:49.795 }, 00:21:49.795 "method": "bdev_nvme_attach_controller" 00:21:49.795 } 00:21:49.795 EOF 00:21:49.795 )") 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:49.795 [2024-12-05 14:11:55.961976] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:21:49.795 [2024-12-05 14:11:55.962047] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:49.795 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:49.795 { 00:21:49.795 "params": { 00:21:49.795 "name": "Nvme$subsystem", 00:21:49.795 "trtype": "$TEST_TRANSPORT", 00:21:49.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.795 "adrfam": "ipv4", 00:21:49.795 "trsvcid": "$NVMF_PORT", 00:21:49.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.795 "hdgst": ${hdgst:-false}, 00:21:49.795 "ddgst": ${ddgst:-false} 00:21:49.795 }, 00:21:49.795 "method": "bdev_nvme_attach_controller" 00:21:49.795 } 00:21:49.795 EOF 00:21:49.795 )") 00:21:49.796 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:49.796 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:49.796 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:49.796 { 00:21:49.796 "params": { 00:21:49.796 "name": "Nvme$subsystem", 00:21:49.796 "trtype": "$TEST_TRANSPORT", 00:21:49.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.796 "adrfam": "ipv4", 00:21:49.796 "trsvcid": "$NVMF_PORT", 00:21:49.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.796 "hdgst": ${hdgst:-false}, 00:21:49.796 "ddgst": ${ddgst:-false} 00:21:49.796 }, 00:21:49.796 "method": "bdev_nvme_attach_controller" 00:21:49.796 } 00:21:49.796 EOF 00:21:49.796 )") 00:21:49.796 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:49.796 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:49.796 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:49.796 { 00:21:49.796 "params": { 00:21:49.796 "name": "Nvme$subsystem", 00:21:49.796 "trtype": "$TEST_TRANSPORT", 00:21:49.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.796 "adrfam": "ipv4", 00:21:49.796 "trsvcid": "$NVMF_PORT", 00:21:49.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.796 "hdgst": ${hdgst:-false}, 00:21:49.796 "ddgst": ${ddgst:-false} 00:21:49.796 }, 00:21:49.796 "method": "bdev_nvme_attach_controller" 00:21:49.796 } 00:21:49.796 EOF 00:21:49.796 )") 00:21:49.796 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:49.796 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:49.796 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:49.796 { 00:21:49.796 "params": { 00:21:49.796 "name": "Nvme$subsystem", 00:21:49.796 "trtype": "$TEST_TRANSPORT", 00:21:49.796 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.796 "adrfam": "ipv4", 
00:21:49.796 "trsvcid": "$NVMF_PORT", 00:21:49.796 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.796 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.796 "hdgst": ${hdgst:-false}, 00:21:49.796 "ddgst": ${ddgst:-false} 00:21:49.796 }, 00:21:49.796 "method": "bdev_nvme_attach_controller" 00:21:49.796 } 00:21:49.796 EOF 00:21:49.796 )") 00:21:49.796 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:49.796 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:21:49.796 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:49.796 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:49.796 "params": { 00:21:49.796 "name": "Nvme1", 00:21:49.796 "trtype": "tcp", 00:21:49.796 "traddr": "10.0.0.2", 00:21:49.796 "adrfam": "ipv4", 00:21:49.796 "trsvcid": "4420", 00:21:49.796 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.796 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:49.796 "hdgst": false, 00:21:49.796 "ddgst": false 00:21:49.796 }, 00:21:49.796 "method": "bdev_nvme_attach_controller" 00:21:49.796 },{ 00:21:49.796 "params": { 00:21:49.796 "name": "Nvme2", 00:21:49.796 "trtype": "tcp", 00:21:49.796 "traddr": "10.0.0.2", 00:21:49.796 "adrfam": "ipv4", 00:21:49.796 "trsvcid": "4420", 00:21:49.796 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:49.796 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:49.796 "hdgst": false, 00:21:49.796 "ddgst": false 00:21:49.796 }, 00:21:49.796 "method": "bdev_nvme_attach_controller" 00:21:49.796 },{ 00:21:49.796 "params": { 00:21:49.796 "name": "Nvme3", 00:21:49.796 "trtype": "tcp", 00:21:49.796 "traddr": "10.0.0.2", 00:21:49.796 "adrfam": "ipv4", 00:21:49.796 "trsvcid": "4420", 00:21:49.796 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:49.796 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:49.796 "hdgst": false, 00:21:49.796 "ddgst": false 00:21:49.796 }, 00:21:49.796 "method": "bdev_nvme_attach_controller" 00:21:49.796 },{ 00:21:49.796 "params": { 00:21:49.796 "name": "Nvme4", 00:21:49.796 "trtype": "tcp", 00:21:49.796 "traddr": "10.0.0.2", 00:21:49.796 "adrfam": "ipv4", 00:21:49.796 "trsvcid": "4420", 00:21:49.796 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:49.796 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:49.796 "hdgst": false, 00:21:49.796 "ddgst": false 00:21:49.796 }, 00:21:49.796 "method": "bdev_nvme_attach_controller" 00:21:49.796 },{ 00:21:49.796 "params": { 00:21:49.796 "name": "Nvme5", 00:21:49.796 "trtype": "tcp", 00:21:49.796 "traddr": "10.0.0.2", 00:21:49.796 "adrfam": "ipv4", 00:21:49.796 "trsvcid": "4420", 00:21:49.796 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:49.796 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:49.796 "hdgst": false, 00:21:49.796 "ddgst": false 00:21:49.796 }, 00:21:49.796 "method": "bdev_nvme_attach_controller" 00:21:49.796 },{ 00:21:49.796 "params": { 00:21:49.796 "name": "Nvme6", 00:21:49.796 "trtype": "tcp", 00:21:49.796 "traddr": "10.0.0.2", 00:21:49.796 "adrfam": "ipv4", 00:21:49.796 "trsvcid": "4420", 00:21:49.796 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:49.796 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:49.796 "hdgst": false, 00:21:49.796 "ddgst": false 00:21:49.796 }, 00:21:49.796 "method": "bdev_nvme_attach_controller" 00:21:49.796 },{ 00:21:49.796 "params": { 00:21:49.796 "name": "Nvme7", 00:21:49.796 "trtype": "tcp", 00:21:49.796 "traddr": "10.0.0.2", 00:21:49.796 
"adrfam": "ipv4", 00:21:49.796 "trsvcid": "4420", 00:21:49.796 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:49.796 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:49.796 "hdgst": false, 00:21:49.796 "ddgst": false 00:21:49.796 }, 00:21:49.796 "method": "bdev_nvme_attach_controller" 00:21:49.796 },{ 00:21:49.796 "params": { 00:21:49.796 "name": "Nvme8", 00:21:49.796 "trtype": "tcp", 00:21:49.796 "traddr": "10.0.0.2", 00:21:49.796 "adrfam": "ipv4", 00:21:49.796 "trsvcid": "4420", 00:21:49.796 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:49.796 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:49.796 "hdgst": false, 00:21:49.796 "ddgst": false 00:21:49.796 }, 00:21:49.796 "method": "bdev_nvme_attach_controller" 00:21:49.796 },{ 00:21:49.796 "params": { 00:21:49.796 "name": "Nvme9", 00:21:49.796 "trtype": "tcp", 00:21:49.796 "traddr": "10.0.0.2", 00:21:49.796 "adrfam": "ipv4", 00:21:49.796 "trsvcid": "4420", 00:21:49.796 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:49.796 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:49.796 "hdgst": false, 00:21:49.796 "ddgst": false 00:21:49.796 }, 00:21:49.796 "method": "bdev_nvme_attach_controller" 00:21:49.797 },{ 00:21:49.797 "params": { 00:21:49.797 "name": "Nvme10", 00:21:49.797 "trtype": "tcp", 00:21:49.797 "traddr": "10.0.0.2", 00:21:49.797 "adrfam": "ipv4", 00:21:49.797 "trsvcid": "4420", 00:21:49.797 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:49.797 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:49.797 "hdgst": false, 00:21:49.797 "ddgst": false 00:21:49.797 }, 00:21:49.797 "method": "bdev_nvme_attach_controller" 00:21:49.797 }' 00:21:49.797 [2024-12-05 14:11:56.056629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.056 [2024-12-05 14:11:56.109782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.440 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.440 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:51.440 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:51.440 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.440 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:51.440 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.440 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2789534 00:21:51.440 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:51.440 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:52.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2789534 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2789156 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.385 { 00:21:52.385 "params": { 00:21:52.385 "name": "Nvme$subsystem", 00:21:52.385 "trtype": "$TEST_TRANSPORT", 00:21:52.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.385 "adrfam": "ipv4", 00:21:52.385 "trsvcid": "$NVMF_PORT", 00:21:52.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.385 "hdgst": ${hdgst:-false}, 00:21:52.385 "ddgst": ${ddgst:-false} 00:21:52.385 }, 00:21:52.385 "method": "bdev_nvme_attach_controller" 00:21:52.385 } 00:21:52.385 EOF 00:21:52.385 )") 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.385 { 00:21:52.385 "params": { 00:21:52.385 "name": "Nvme$subsystem", 00:21:52.385 "trtype": "$TEST_TRANSPORT", 00:21:52.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.385 "adrfam": "ipv4", 00:21:52.385 "trsvcid": "$NVMF_PORT", 00:21:52.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.385 "hdgst": ${hdgst:-false}, 00:21:52.385 "ddgst": ${ddgst:-false} 00:21:52.385 }, 00:21:52.385 "method": "bdev_nvme_attach_controller" 00:21:52.385 } 00:21:52.385 EOF 00:21:52.385 )") 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.385 { 00:21:52.385 "params": { 00:21:52.385 "name": "Nvme$subsystem", 00:21:52.385 "trtype": "$TEST_TRANSPORT", 00:21:52.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.385 "adrfam": "ipv4", 00:21:52.385 "trsvcid": "$NVMF_PORT", 00:21:52.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.385 "hdgst": ${hdgst:-false}, 00:21:52.385 "ddgst": ${ddgst:-false} 00:21:52.385 }, 00:21:52.385 "method": "bdev_nvme_attach_controller" 00:21:52.385 } 00:21:52.385 EOF 00:21:52.385 )") 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.385 { 00:21:52.385 "params": { 00:21:52.385 "name": "Nvme$subsystem", 00:21:52.385 "trtype": "$TEST_TRANSPORT", 00:21:52.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.385 "adrfam": "ipv4", 00:21:52.385 "trsvcid": "$NVMF_PORT", 00:21:52.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.385 "hdgst": ${hdgst:-false}, 00:21:52.385 "ddgst": ${ddgst:-false} 00:21:52.385 }, 00:21:52.385 "method": "bdev_nvme_attach_controller" 00:21:52.385 } 00:21:52.385 EOF 00:21:52.385 )") 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.385 { 00:21:52.385 "params": { 00:21:52.385 "name": "Nvme$subsystem", 00:21:52.385 "trtype": "$TEST_TRANSPORT", 00:21:52.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.385 "adrfam": "ipv4", 00:21:52.385 "trsvcid": "$NVMF_PORT", 00:21:52.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.385 "hdgst": ${hdgst:-false}, 00:21:52.385 "ddgst": ${ddgst:-false} 00:21:52.385 }, 00:21:52.385 "method": "bdev_nvme_attach_controller" 00:21:52.385 } 00:21:52.385 EOF 00:21:52.385 )") 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.385 { 00:21:52.385 "params": { 00:21:52.385 "name": "Nvme$subsystem", 00:21:52.385 "trtype": "$TEST_TRANSPORT", 00:21:52.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.385 "adrfam": "ipv4", 00:21:52.385 "trsvcid": "$NVMF_PORT", 00:21:52.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.385 "hdgst": ${hdgst:-false}, 00:21:52.385 "ddgst": ${ddgst:-false} 00:21:52.385 }, 00:21:52.385 "method": "bdev_nvme_attach_controller" 00:21:52.385 } 00:21:52.385 EOF 00:21:52.385 )") 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.385 [2024-12-05 14:11:58.635341] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:21:52.385 [2024-12-05 14:11:58.635397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2790126 ] 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.385 { 00:21:52.385 "params": { 00:21:52.385 "name": "Nvme$subsystem", 00:21:52.385 "trtype": "$TEST_TRANSPORT", 00:21:52.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.385 "adrfam": "ipv4", 00:21:52.385 "trsvcid": "$NVMF_PORT", 00:21:52.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.385 "hdgst": ${hdgst:-false}, 00:21:52.385 "ddgst": ${ddgst:-false} 00:21:52.385 }, 00:21:52.385 "method": "bdev_nvme_attach_controller" 00:21:52.385 } 00:21:52.385 EOF 00:21:52.385 )") 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.385 { 00:21:52.385 "params": { 00:21:52.385 "name": "Nvme$subsystem", 00:21:52.385 "trtype": "$TEST_TRANSPORT", 00:21:52.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.385 "adrfam": "ipv4", 00:21:52.385 "trsvcid": "$NVMF_PORT", 00:21:52.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.385 "hdgst": ${hdgst:-false}, 00:21:52.385 "ddgst": ${ddgst:-false} 00:21:52.385 }, 00:21:52.385 "method": "bdev_nvme_attach_controller" 00:21:52.385 } 00:21:52.385 EOF 00:21:52.385 )") 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.385 { 00:21:52.385 "params": { 00:21:52.385 "name": "Nvme$subsystem", 00:21:52.385 "trtype": "$TEST_TRANSPORT", 00:21:52.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.385 "adrfam": "ipv4", 00:21:52.385 "trsvcid": "$NVMF_PORT", 00:21:52.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.385 "hdgst": ${hdgst:-false}, 00:21:52.385 "ddgst": ${ddgst:-false} 00:21:52.385 }, 00:21:52.385 "method": "bdev_nvme_attach_controller" 00:21:52.385 } 00:21:52.385 EOF 00:21:52.385 )") 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:52.385 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:52.385 { 00:21:52.385 "params": { 00:21:52.385 "name": "Nvme$subsystem", 00:21:52.385 "trtype": "$TEST_TRANSPORT", 00:21:52.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:52.385 
"adrfam": "ipv4", 00:21:52.385 "trsvcid": "$NVMF_PORT", 00:21:52.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:52.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:52.385 "hdgst": ${hdgst:-false}, 00:21:52.386 "ddgst": ${ddgst:-false} 00:21:52.386 }, 00:21:52.386 "method": "bdev_nvme_attach_controller" 00:21:52.386 } 00:21:52.386 EOF 00:21:52.386 )") 00:21:52.386 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:52.386 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:21:52.386 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:52.386 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:52.386 "params": { 00:21:52.386 "name": "Nvme1", 00:21:52.386 "trtype": "tcp", 00:21:52.386 "traddr": "10.0.0.2", 00:21:52.386 "adrfam": "ipv4", 00:21:52.386 "trsvcid": "4420", 00:21:52.386 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.386 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:52.386 "hdgst": false, 00:21:52.386 "ddgst": false 00:21:52.386 }, 00:21:52.386 "method": "bdev_nvme_attach_controller" 00:21:52.386 },{ 00:21:52.386 "params": { 00:21:52.386 "name": "Nvme2", 00:21:52.386 "trtype": "tcp", 00:21:52.386 "traddr": "10.0.0.2", 00:21:52.386 "adrfam": "ipv4", 00:21:52.386 "trsvcid": "4420", 00:21:52.386 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:52.386 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:52.386 "hdgst": false, 00:21:52.386 "ddgst": false 00:21:52.386 }, 00:21:52.386 "method": "bdev_nvme_attach_controller" 00:21:52.386 },{ 00:21:52.386 "params": { 00:21:52.386 "name": "Nvme3", 00:21:52.386 "trtype": "tcp", 00:21:52.386 "traddr": "10.0.0.2", 00:21:52.386 "adrfam": "ipv4", 00:21:52.386 "trsvcid": "4420", 00:21:52.386 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:52.386 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:52.386 "hdgst": false, 00:21:52.386 "ddgst": false 00:21:52.386 }, 00:21:52.386 "method": "bdev_nvme_attach_controller" 00:21:52.386 },{ 00:21:52.386 "params": { 00:21:52.386 "name": "Nvme4", 00:21:52.386 "trtype": "tcp", 00:21:52.386 "traddr": "10.0.0.2", 00:21:52.386 "adrfam": "ipv4", 00:21:52.386 "trsvcid": "4420", 00:21:52.386 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:52.386 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:52.386 "hdgst": false, 00:21:52.386 "ddgst": false 00:21:52.386 }, 00:21:52.386 "method": "bdev_nvme_attach_controller" 00:21:52.386 },{ 00:21:52.386 "params": { 00:21:52.386 "name": "Nvme5", 00:21:52.386 "trtype": "tcp", 00:21:52.386 "traddr": "10.0.0.2", 00:21:52.386 "adrfam": "ipv4", 00:21:52.386 "trsvcid": "4420", 00:21:52.386 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:52.386 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:52.386 "hdgst": false, 00:21:52.386 "ddgst": false 00:21:52.386 }, 00:21:52.386 "method": "bdev_nvme_attach_controller" 00:21:52.386 },{ 00:21:52.386 "params": { 00:21:52.386 "name": "Nvme6", 00:21:52.386 "trtype": "tcp", 00:21:52.386 "traddr": "10.0.0.2", 00:21:52.386 "adrfam": "ipv4", 00:21:52.386 "trsvcid": "4420", 00:21:52.386 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:52.386 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:52.386 "hdgst": false, 00:21:52.386 "ddgst": false 00:21:52.386 }, 00:21:52.386 "method": "bdev_nvme_attach_controller" 00:21:52.386 },{ 00:21:52.386 "params": { 00:21:52.386 "name": "Nvme7", 00:21:52.386 "trtype": "tcp", 00:21:52.386 "traddr": "10.0.0.2", 
00:21:52.386 "adrfam": "ipv4", 00:21:52.386 "trsvcid": "4420", 00:21:52.386 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:52.386 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:52.386 "hdgst": false, 00:21:52.386 "ddgst": false 00:21:52.386 }, 00:21:52.386 "method": "bdev_nvme_attach_controller" 00:21:52.386 },{ 00:21:52.386 "params": { 00:21:52.386 "name": "Nvme8", 00:21:52.386 "trtype": "tcp", 00:21:52.386 "traddr": "10.0.0.2", 00:21:52.386 "adrfam": "ipv4", 00:21:52.386 "trsvcid": "4420", 00:21:52.386 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:52.386 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:52.386 "hdgst": false, 00:21:52.386 "ddgst": false 00:21:52.386 }, 00:21:52.386 "method": "bdev_nvme_attach_controller" 00:21:52.386 },{ 00:21:52.386 "params": { 00:21:52.386 "name": "Nvme9", 00:21:52.386 "trtype": "tcp", 00:21:52.386 "traddr": "10.0.0.2", 00:21:52.386 "adrfam": "ipv4", 00:21:52.386 "trsvcid": "4420", 00:21:52.386 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:52.386 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:52.386 "hdgst": false, 00:21:52.386 "ddgst": false 00:21:52.386 }, 00:21:52.386 "method": "bdev_nvme_attach_controller" 00:21:52.386 },{ 00:21:52.386 "params": { 00:21:52.386 "name": "Nvme10", 00:21:52.386 "trtype": "tcp", 00:21:52.386 "traddr": "10.0.0.2", 00:21:52.386 "adrfam": "ipv4", 00:21:52.386 "trsvcid": "4420", 00:21:52.386 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:52.386 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:52.386 "hdgst": false, 00:21:52.386 "ddgst": false 00:21:52.386 }, 00:21:52.386 "method": "bdev_nvme_attach_controller" 00:21:52.386 }' 00:21:52.647 [2024-12-05 14:11:58.726582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.647 [2024-12-05 14:11:58.762463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.032 Running I/O for 1 seconds... 
00:21:55.239 1873.00 IOPS, 117.06 MiB/s
00:21:55.239 Latency(us)
00:21:55.239 [2024-12-05T13:12:01.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:55.239 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:55.239 Verification LBA range: start 0x0 length 0x400
00:21:55.239 Nvme1n1 : 1.16 221.58 13.85 0.00 0.00 285882.67 14964.05 251658.24
00:21:55.239 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:55.239 Verification LBA range: start 0x0 length 0x400
00:21:55.239 Nvme2n1 : 1.14 223.60 13.97 0.00 0.00 278156.16 17585.49 246415.36
00:21:55.239 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:55.239 Verification LBA range: start 0x0 length 0x400
00:21:55.239 Nvme3n1 : 1.13 231.07 14.44 0.00 0.00 258625.85 16711.68 228939.09
00:21:55.239 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:55.239 Verification LBA range: start 0x0 length 0x400
00:21:55.239 Nvme4n1 : 1.13 226.20 14.14 0.00 0.00 265146.67 17476.27 262144.00
00:21:55.239 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:55.239 Verification LBA range: start 0x0 length 0x400
00:21:55.239 Nvme5n1 : 1.14 223.83 13.99 0.00 0.00 263085.44 19223.89 246415.36
00:21:55.239 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:55.239 Verification LBA range: start 0x0 length 0x400
00:21:55.239 Nvme6n1 : 1.18 270.81 16.93 0.00 0.00 214094.34 18350.08 258648.75
00:21:55.239 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:55.239 Verification LBA range: start 0x0 length 0x400
00:21:55.239 Nvme7n1 : 1.15 222.89 13.93 0.00 0.00 255629.01 17257.81 249910.61
00:21:55.239 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:55.239 Verification LBA range: start 0x0 length 0x400
00:21:55.239 Nvme8n1 : 1.16 280.90 17.56 0.00 0.00 198668.81 4587.52 249910.61
00:21:55.239 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:55.239 Verification LBA range: start 0x0 length 0x400
00:21:55.239 Nvme9n1 : 1.18 274.40 17.15 0.00 0.00 199324.76 5079.04 237677.23
00:21:55.239 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:55.239 Verification LBA range: start 0x0 length 0x400
00:21:55.239 Nvme10n1 : 1.20 264.77 16.55 0.00 0.00 204562.23 7045.12 279620.27
00:21:55.239 [2024-12-05T13:12:01.539Z] ===================================================================================================================
00:21:55.239 [2024-12-05T13:12:01.539Z] Total : 2440.07 152.50 0.00 0.00 238760.25 4587.52 279620.27
00:21:55.239 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:21:55.239 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:21:55.239 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:55.239 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:55.239 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:21:55.239 14:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:55.239 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:21:55.239 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:55.239 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:21:55.239 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:55.239 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:55.239 rmmod nvme_tcp 00:21:55.239 rmmod nvme_fabrics 00:21:55.239 rmmod nvme_keyring 00:21:55.239 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:55.239 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:21:55.239 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:21:55.240 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2789156 ']' 00:21:55.240 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2789156 00:21:55.240 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2789156 ']' 00:21:55.240 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2789156 00:21:55.240 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:21:55.240 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:55.240 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2789156 00:21:55.501 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:55.501 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:55.501 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2789156' 00:21:55.501 killing process with pid 2789156 00:21:55.501 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2789156 00:21:55.501 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2789156 00:21:55.501 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:55.501 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:55.501 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:55.501 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:55.501 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:21:55.501 14:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:55.501 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:21:55.502 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:55.502 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:55.763 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.763 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.763 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.674 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:57.674 00:21:57.674 real 0m17.127s 00:21:57.674 user 0m34.995s 00:21:57.674 sys 0m7.012s 00:21:57.674 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:57.674 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:57.674 ************************************ 00:21:57.674 END TEST nvmf_shutdown_tc1 00:21:57.674 ************************************ 00:21:57.674 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:57.674 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:57.674 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:57.674 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:57.674 ************************************ 00:21:57.674 START TEST nvmf_shutdown_tc2 00:21:57.674 ************************************ 00:21:57.674 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:21:57.674 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:57.674 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:57.674 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:57.674 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.674 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:57.674 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:57.674 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:57.674 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.674 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.674 
14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:57.936 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:57.936 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:57.937 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:57.937 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:57.937 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:57.937 14:12:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:57.937 14:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:57.937 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:57.937 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:57.937 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:57.937 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:57.937 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:58.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:21:58.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.720 ms 00:21:58.199 00:21:58.199 --- 10.0.0.2 ping statistics --- 00:21:58.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.199 rtt min/avg/max/mdev = 0.720/0.720/0.720/0.000 ms 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:58.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:58.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:21:58.199 00:21:58.199 --- 10.0.0.1 ping statistics --- 00:21:58.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.199 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2791450 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2791450 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2791450 ']' 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.199 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:58.199 [2024-12-05 14:12:04.400235] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:21:58.199 [2024-12-05 14:12:04.400301] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.199 [2024-12-05 14:12:04.496631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:58.459 [2024-12-05 14:12:04.530764] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.459 [2024-12-05 14:12:04.530798] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.459 [2024-12-05 14:12:04.530804] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.459 [2024-12-05 14:12:04.530809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.459 [2024-12-05 14:12:04.530813] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:58.459 [2024-12-05 14:12:04.532149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.459 [2024-12-05 14:12:04.532300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:58.459 [2024-12-05 14:12:04.532459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.459 [2024-12-05 14:12:04.532471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:59.029 [2024-12-05 14:12:05.248272] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.029 
14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 
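Each `-- target/shutdown.sh@29 -- # cat` record above appends one block of RPC commands for subsystem $i to rpcs.txt, and the bare `rpc_cmd` at shutdown.sh@36 then replays the whole file against the target in a single RPC session. The heredoc body itself is elided from the xtrace, so the following is only a sketch of the pattern, assuming one malloc bdev per subsystem with placeholder sizes (64 MiB bdev, 512-byte blocks); the listener address matches the 10.0.0.2:4420 target address configured earlier in this log:

# Sketch only: the real heredoc body is not visible in this xtrace.
num_subsystems=({1..10})
rm -f rpcs.txt
for i in "${num_subsystems[@]}"; do
    cat >> rpcs.txt << EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_cmd < rpcs.txt    # one batch creates all ten subsystems

Batching the ten subsystems through one rpc.py invocation avoids paying the RPC client startup cost per command, which is why the Malloc1 through Malloc10 confirmations below arrive in one burst.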
00:21:59.029 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:59.294 Malloc1 00:21:59.294 [2024-12-05 14:12:05.360641] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.294 Malloc2 00:21:59.294 Malloc3 00:21:59.294 Malloc4 00:21:59.294 Malloc5 00:21:59.294 Malloc6 00:21:59.294 Malloc7 00:21:59.555 Malloc8 00:21:59.555 Malloc9 00:21:59.555 Malloc10 00:21:59.555 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.555 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:59.555 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:59.555 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:59.555 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2791788 00:21:59.555 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2791788 /var/tmp/bdevperf.sock 00:21:59.555 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2791788 ']' 00:21:59.555 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:59.555 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:59.555 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:59.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
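Note that `waitforlisten 2791788 /var/tmp/bdevperf.sock` above is pointed at the bdevperf RPC socket rather than the target's default /var/tmp/spdk.sock: the test blocks until the bdevperf process launched at shutdown.sh@103 (next record) starts answering RPC there. Only fragments of the helper show up in the xtrace (`local max_retries=100`, the echo, the closing `(( i == 0 ))` / `return 0`), so this reconstruction is a guess; in particular the rpc_get_methods liveness probe and the scripts/rpc.py path are assumptions, not the helper's real body:

# Guesswork sketch of the waitforlisten idiom, not autotest_common.sh verbatim.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        # bail out early if the app crashed instead of listening
        kill -0 "$pid" 2> /dev/null || return 1
        # probe the UNIX-domain socket with a cheap RPC; it succeeds
        # once the app's RPC server is accepting connections
        if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
# invoked here as: waitforlisten "$perfpid" /var/tmp/bdevperf.sock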
00:21:59.555 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:59.555 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:59.555 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:59.555 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:59.555 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:21:59.555 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:21:59.555 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:59.555 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:59.555 { 00:21:59.555 "params": { 00:21:59.555 "name": "Nvme$subsystem", 00:21:59.555 "trtype": "$TEST_TRANSPORT", 00:21:59.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.555 "adrfam": "ipv4", 00:21:59.555 "trsvcid": "$NVMF_PORT", 00:21:59.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.555 "hdgst": ${hdgst:-false}, 00:21:59.555 "ddgst": ${ddgst:-false} 00:21:59.555 }, 00:21:59.555 "method": "bdev_nvme_attach_controller" 00:21:59.555 } 00:21:59.555 EOF 00:21:59.555 )") 00:21:59.555 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:59.556 { 00:21:59.556 "params": { 00:21:59.556 "name": "Nvme$subsystem", 00:21:59.556 "trtype": "$TEST_TRANSPORT", 00:21:59.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.556 "adrfam": "ipv4", 00:21:59.556 "trsvcid": "$NVMF_PORT", 00:21:59.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.556 "hdgst": ${hdgst:-false}, 00:21:59.556 "ddgst": ${ddgst:-false} 00:21:59.556 }, 00:21:59.556 "method": "bdev_nvme_attach_controller" 00:21:59.556 } 00:21:59.556 EOF 00:21:59.556 )") 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:59.556 { 00:21:59.556 "params": { 00:21:59.556 "name": "Nvme$subsystem", 00:21:59.556 "trtype": "$TEST_TRANSPORT", 00:21:59.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.556 "adrfam": "ipv4", 00:21:59.556 "trsvcid": "$NVMF_PORT", 00:21:59.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.556 "hdgst": ${hdgst:-false}, 00:21:59.556 "ddgst": ${ddgst:-false} 00:21:59.556 }, 00:21:59.556 "method": 
"bdev_nvme_attach_controller" 00:21:59.556 } 00:21:59.556 EOF 00:21:59.556 )") 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:59.556 { 00:21:59.556 "params": { 00:21:59.556 "name": "Nvme$subsystem", 00:21:59.556 "trtype": "$TEST_TRANSPORT", 00:21:59.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.556 "adrfam": "ipv4", 00:21:59.556 "trsvcid": "$NVMF_PORT", 00:21:59.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.556 "hdgst": ${hdgst:-false}, 00:21:59.556 "ddgst": ${ddgst:-false} 00:21:59.556 }, 00:21:59.556 "method": "bdev_nvme_attach_controller" 00:21:59.556 } 00:21:59.556 EOF 00:21:59.556 )") 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:59.556 { 00:21:59.556 "params": { 00:21:59.556 "name": "Nvme$subsystem", 00:21:59.556 "trtype": "$TEST_TRANSPORT", 00:21:59.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.556 "adrfam": "ipv4", 00:21:59.556 "trsvcid": "$NVMF_PORT", 00:21:59.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.556 "hdgst": ${hdgst:-false}, 00:21:59.556 "ddgst": ${ddgst:-false} 00:21:59.556 }, 00:21:59.556 "method": "bdev_nvme_attach_controller" 00:21:59.556 } 00:21:59.556 EOF 00:21:59.556 )") 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:59.556 { 00:21:59.556 "params": { 00:21:59.556 "name": "Nvme$subsystem", 00:21:59.556 "trtype": "$TEST_TRANSPORT", 00:21:59.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.556 "adrfam": "ipv4", 00:21:59.556 "trsvcid": "$NVMF_PORT", 00:21:59.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.556 "hdgst": ${hdgst:-false}, 00:21:59.556 "ddgst": ${ddgst:-false} 00:21:59.556 }, 00:21:59.556 "method": "bdev_nvme_attach_controller" 00:21:59.556 } 00:21:59.556 EOF 00:21:59.556 )") 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:59.556 [2024-12-05 14:12:05.814102] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:21:59.556 [2024-12-05 14:12:05.814158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2791788 ] 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:59.556 { 00:21:59.556 "params": { 00:21:59.556 "name": "Nvme$subsystem", 00:21:59.556 "trtype": "$TEST_TRANSPORT", 00:21:59.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.556 "adrfam": "ipv4", 00:21:59.556 "trsvcid": "$NVMF_PORT", 00:21:59.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.556 "hdgst": ${hdgst:-false}, 00:21:59.556 "ddgst": ${ddgst:-false} 00:21:59.556 }, 00:21:59.556 "method": "bdev_nvme_attach_controller" 00:21:59.556 } 00:21:59.556 EOF 00:21:59.556 )") 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:59.556 { 00:21:59.556 "params": { 00:21:59.556 "name": "Nvme$subsystem", 00:21:59.556 "trtype": "$TEST_TRANSPORT", 00:21:59.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.556 "adrfam": "ipv4", 00:21:59.556 "trsvcid": "$NVMF_PORT", 00:21:59.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.556 "hdgst": ${hdgst:-false}, 00:21:59.556 "ddgst": ${ddgst:-false} 00:21:59.556 }, 00:21:59.556 "method": "bdev_nvme_attach_controller" 00:21:59.556 } 00:21:59.556 EOF 00:21:59.556 )") 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:59.556 { 00:21:59.556 "params": { 00:21:59.556 "name": "Nvme$subsystem", 00:21:59.556 "trtype": "$TEST_TRANSPORT", 00:21:59.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.556 "adrfam": "ipv4", 00:21:59.556 "trsvcid": "$NVMF_PORT", 00:21:59.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.556 "hdgst": ${hdgst:-false}, 00:21:59.556 "ddgst": ${ddgst:-false} 00:21:59.556 }, 00:21:59.556 "method": "bdev_nvme_attach_controller" 00:21:59.556 } 00:21:59.556 EOF 00:21:59.556 )") 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:59.556 { 00:21:59.556 "params": { 00:21:59.556 "name": "Nvme$subsystem", 00:21:59.556 "trtype": "$TEST_TRANSPORT", 00:21:59.556 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.556 
"adrfam": "ipv4", 00:21:59.556 "trsvcid": "$NVMF_PORT", 00:21:59.556 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.556 "hdgst": ${hdgst:-false}, 00:21:59.556 "ddgst": ${ddgst:-false} 00:21:59.556 }, 00:21:59.556 "method": "bdev_nvme_attach_controller" 00:21:59.556 } 00:21:59.556 EOF 00:21:59.556 )") 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:21:59.556 14:12:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:59.556 "params": { 00:21:59.556 "name": "Nvme1", 00:21:59.556 "trtype": "tcp", 00:21:59.556 "traddr": "10.0.0.2", 00:21:59.556 "adrfam": "ipv4", 00:21:59.556 "trsvcid": "4420", 00:21:59.556 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.556 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:59.556 "hdgst": false, 00:21:59.556 "ddgst": false 00:21:59.556 }, 00:21:59.556 "method": "bdev_nvme_attach_controller" 00:21:59.556 },{ 00:21:59.556 "params": { 00:21:59.556 "name": "Nvme2", 00:21:59.556 "trtype": "tcp", 00:21:59.556 "traddr": "10.0.0.2", 00:21:59.556 "adrfam": "ipv4", 00:21:59.556 "trsvcid": "4420", 00:21:59.556 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:59.556 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:59.556 "hdgst": false, 00:21:59.556 "ddgst": false 00:21:59.556 }, 00:21:59.556 "method": "bdev_nvme_attach_controller" 00:21:59.556 },{ 00:21:59.556 "params": { 00:21:59.556 "name": "Nvme3", 00:21:59.556 "trtype": "tcp", 00:21:59.556 "traddr": "10.0.0.2", 00:21:59.557 "adrfam": "ipv4", 00:21:59.557 "trsvcid": "4420", 00:21:59.557 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:59.557 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:59.557 "hdgst": false, 00:21:59.557 "ddgst": false 00:21:59.557 }, 00:21:59.557 "method": "bdev_nvme_attach_controller" 00:21:59.557 },{ 00:21:59.557 "params": { 00:21:59.557 "name": "Nvme4", 00:21:59.557 "trtype": "tcp", 00:21:59.557 "traddr": "10.0.0.2", 00:21:59.557 "adrfam": "ipv4", 00:21:59.557 "trsvcid": "4420", 00:21:59.557 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:59.557 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:59.557 "hdgst": false, 00:21:59.557 "ddgst": false 00:21:59.557 }, 00:21:59.557 "method": "bdev_nvme_attach_controller" 00:21:59.557 },{ 00:21:59.557 "params": { 00:21:59.557 "name": "Nvme5", 00:21:59.557 "trtype": "tcp", 00:21:59.557 "traddr": "10.0.0.2", 00:21:59.557 "adrfam": "ipv4", 00:21:59.557 "trsvcid": "4420", 00:21:59.557 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:59.557 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:59.557 "hdgst": false, 00:21:59.557 "ddgst": false 00:21:59.557 }, 00:21:59.557 "method": "bdev_nvme_attach_controller" 00:21:59.557 },{ 00:21:59.557 "params": { 00:21:59.557 "name": "Nvme6", 00:21:59.557 "trtype": "tcp", 00:21:59.557 "traddr": "10.0.0.2", 00:21:59.557 "adrfam": "ipv4", 00:21:59.557 "trsvcid": "4420", 00:21:59.557 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:59.557 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:59.557 "hdgst": false, 00:21:59.557 "ddgst": false 00:21:59.557 }, 00:21:59.557 "method": "bdev_nvme_attach_controller" 00:21:59.557 },{ 00:21:59.557 "params": { 00:21:59.557 "name": "Nvme7", 00:21:59.557 "trtype": "tcp", 00:21:59.557 "traddr": "10.0.0.2", 
00:21:59.557 "adrfam": "ipv4", 00:21:59.557 "trsvcid": "4420", 00:21:59.557 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:59.557 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:59.557 "hdgst": false, 00:21:59.557 "ddgst": false 00:21:59.557 }, 00:21:59.557 "method": "bdev_nvme_attach_controller" 00:21:59.557 },{ 00:21:59.557 "params": { 00:21:59.557 "name": "Nvme8", 00:21:59.557 "trtype": "tcp", 00:21:59.557 "traddr": "10.0.0.2", 00:21:59.557 "adrfam": "ipv4", 00:21:59.557 "trsvcid": "4420", 00:21:59.557 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:59.557 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:59.557 "hdgst": false, 00:21:59.557 "ddgst": false 00:21:59.557 }, 00:21:59.557 "method": "bdev_nvme_attach_controller" 00:21:59.557 },{ 00:21:59.557 "params": { 00:21:59.557 "name": "Nvme9", 00:21:59.557 "trtype": "tcp", 00:21:59.557 "traddr": "10.0.0.2", 00:21:59.557 "adrfam": "ipv4", 00:21:59.557 "trsvcid": "4420", 00:21:59.557 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:59.557 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:59.557 "hdgst": false, 00:21:59.557 "ddgst": false 00:21:59.557 }, 00:21:59.557 "method": "bdev_nvme_attach_controller" 00:21:59.557 },{ 00:21:59.557 "params": { 00:21:59.557 "name": "Nvme10", 00:21:59.557 "trtype": "tcp", 00:21:59.557 "traddr": "10.0.0.2", 00:21:59.557 "adrfam": "ipv4", 00:21:59.557 "trsvcid": "4420", 00:21:59.557 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:59.557 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:59.557 "hdgst": false, 00:21:59.557 "ddgst": false 00:21:59.557 }, 00:21:59.557 "method": "bdev_nvme_attach_controller" 00:21:59.557 }' 00:21:59.817 [2024-12-05 14:12:05.902898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.817 [2024-12-05 14:12:05.939297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.199 Running I/O for 10 seconds... 
00:22:01.199 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:01.199 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:01.199 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:01.199 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.199 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.459 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.459 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:01.459 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:01.459 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:01.459 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:01.459 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:01.459 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:01.459 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:01.459 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:01.459 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:01.459 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.459 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.459 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.459 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:01.459 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:01.459 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:01.719 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:01.719 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:01.719 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:01.719 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:01.719 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.719 14:12:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.719 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.719 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:01.719 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:01.719 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:01.981 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:01.981 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:01.981 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:01.981 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:01.981 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.981 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:01.981 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.981 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:01.982 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:01.982 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:01.982 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:01.982 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:01.982 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2791788 00:22:01.982 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2791788 ']' 00:22:01.982 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2791788 00:22:01.982 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:01.982 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:01.982 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2791788 00:22:02.243 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:02.243 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:02.243 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2791788' 00:22:02.243 killing process with pid 2791788 00:22:02.243 14:12:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2791788 00:22:02.243 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2791788 00:22:02.243 Received shutdown signal, test time was about 0.971545 seconds 00:22:02.243 00:22:02.243 Latency(us) 00:22:02.243 [2024-12-05T13:12:08.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.243 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.243 Verification LBA range: start 0x0 length 0x400 00:22:02.243 Nvme1n1 : 0.96 266.94 16.68 0.00 0.00 236743.47 20643.84 242920.11 00:22:02.243 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.243 Verification LBA range: start 0x0 length 0x400 00:22:02.243 Nvme2n1 : 0.93 205.39 12.84 0.00 0.00 301340.16 18786.99 246415.36 00:22:02.243 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.243 Verification LBA range: start 0x0 length 0x400 00:22:02.243 Nvme3n1 : 0.96 265.80 16.61 0.00 0.00 228184.96 16493.23 239424.85 00:22:02.243 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.243 Verification LBA range: start 0x0 length 0x400 00:22:02.243 Nvme4n1 : 0.96 267.86 16.74 0.00 0.00 221505.28 17476.27 248162.99 00:22:02.243 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.243 Verification LBA range: start 0x0 length 0x400 00:22:02.243 Nvme5n1 : 0.97 269.23 16.83 0.00 0.00 215594.54 19879.25 241172.48 00:22:02.243 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.243 Verification LBA range: start 0x0 length 0x400 00:22:02.243 Nvme6n1 : 0.93 206.84 12.93 0.00 0.00 273349.97 18786.99 265639.25 00:22:02.243 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.243 Verification LBA range: start 0x0 length 0x400 00:22:02.243 Nvme7n1 : 0.97 263.74 16.48 0.00 0.00 210612.48 19114.67 242920.11 00:22:02.243 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.243 Verification LBA range: start 0x0 length 0x400 00:22:02.243 Nvme8n1 : 0.95 268.60 16.79 0.00 0.00 201501.44 18568.53 242920.11 00:22:02.243 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.243 Verification LBA range: start 0x0 length 0x400 00:22:02.243 Nvme9n1 : 0.94 203.25 12.70 0.00 0.00 259343.36 15619.41 248162.99 00:22:02.243 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:02.243 Verification LBA range: start 0x0 length 0x400 00:22:02.243 Nvme10n1 : 0.95 201.93 12.62 0.00 0.00 255002.45 17913.17 269134.51 00:22:02.243 [2024-12-05T13:12:08.543Z] =================================================================================================================== 00:22:02.243 [2024-12-05T13:12:08.543Z] Total : 2419.58 151.22 0.00 0.00 236732.09 15619.41 269134.51 00:22:02.243 14:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:03.184 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2791450 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:03.446 14:12:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:03.446 rmmod nvme_tcp 00:22:03.446 rmmod nvme_fabrics 00:22:03.446 rmmod nvme_keyring 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2791450 ']' 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2791450 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2791450 ']' 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2791450 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2791450 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2791450' 00:22:03.446 killing process with pid 2791450 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2791450 00:22:03.446 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2791450 00:22:03.708 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:03.708 14:12:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:03.708 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:03.708 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:03.708 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:03.708 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:03.708 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:03.708 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:03.708 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:03.708 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.708 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:03.708 14:12:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.624 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:05.887 00:22:05.887 real 0m7.965s 00:22:05.887 user 0m24.140s 00:22:05.887 sys 0m1.306s 00:22:05.887 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:05.887 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:05.887 ************************************ 00:22:05.887 END TEST nvmf_shutdown_tc2 00:22:05.887 ************************************ 00:22:05.887 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:05.887 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:05.887 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:05.887 14:12:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:05.887 ************************************ 00:22:05.887 START TEST nvmf_shutdown_tc3 00:22:05.887 ************************************ 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:05.887 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:05.888 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:05.888 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:05.888 14:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:05.888 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:05.888 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.888 14:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:05.888 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:06.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:06.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:22:06.150 00:22:06.150 --- 10.0.0.2 ping statistics --- 00:22:06.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.150 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:06.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:06.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:22:06.150 00:22:06.150 --- 10.0.0.1 ping statistics --- 00:22:06.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.150 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2793506 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2793506 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2793506 ']' 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.150 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:06.411 [2024-12-05 14:12:12.457149] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:22:06.411 [2024-12-05 14:12:12.457217] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.411 [2024-12-05 14:12:12.554183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:06.411 [2024-12-05 14:12:12.594694] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.411 [2024-12-05 14:12:12.594733] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.411 [2024-12-05 14:12:12.594739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.411 [2024-12-05 14:12:12.594744] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.411 [2024-12-05 14:12:12.594748] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
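Condensed from the nvmf_tcp_init trace above (nvmf/common.sh@250-@291), the namespace plumbing amounts to the following. The interface names and addresses are the ones from this run; the commands are the same ones the trace shows, just stripped of the wrapper functions.

# Target port (cvl_0_0) moves into a namespace with 10.0.0.2; the
# initiator port (cvl_0_1) stays in the root namespace with 10.0.0.1.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP traffic (port 4420) arriving on the initiator-side port.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity-check both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1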
00:22:06.411 [2024-12-05 14:12:12.596262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.411 [2024-12-05 14:12:12.596420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:06.411 [2024-12-05 14:12:12.596598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:06.411 [2024-12-05 14:12:12.596744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.983 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:06.983 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:06.983 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:06.983 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:06.983 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:07.244 [2024-12-05 14:12:13.311636] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.244 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:07.244 Malloc1 00:22:07.244 [2024-12-05 14:12:13.420425] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:07.244 Malloc2 00:22:07.244 Malloc3 00:22:07.244 Malloc4 00:22:07.505 Malloc5 00:22:07.505 Malloc6 00:22:07.505 Malloc7 00:22:07.505 Malloc8 00:22:07.505 Malloc9 00:22:07.505 Malloc10 00:22:07.505 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.505 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:07.505 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:07.505 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2793822 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2793822 /var/tmp/bdevperf.sock 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2793822 ']' 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:07.767 14:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:07.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:07.767 { 00:22:07.767 "params": { 00:22:07.767 "name": "Nvme$subsystem", 00:22:07.767 "trtype": "$TEST_TRANSPORT", 00:22:07.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.767 "adrfam": "ipv4", 00:22:07.767 "trsvcid": "$NVMF_PORT", 00:22:07.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.767 "hdgst": ${hdgst:-false}, 00:22:07.767 "ddgst": ${ddgst:-false} 00:22:07.767 }, 00:22:07.767 "method": "bdev_nvme_attach_controller" 00:22:07.767 } 00:22:07.767 EOF 00:22:07.767 )") 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:07.767 { 00:22:07.767 "params": { 00:22:07.767 "name": "Nvme$subsystem", 00:22:07.767 "trtype": "$TEST_TRANSPORT", 00:22:07.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.767 "adrfam": "ipv4", 00:22:07.767 "trsvcid": "$NVMF_PORT", 00:22:07.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.767 "hdgst": ${hdgst:-false}, 00:22:07.767 "ddgst": ${ddgst:-false} 00:22:07.767 }, 00:22:07.767 "method": "bdev_nvme_attach_controller" 00:22:07.767 } 00:22:07.767 EOF 00:22:07.767 )") 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:07.767 { 00:22:07.767 "params": { 00:22:07.767 
"name": "Nvme$subsystem", 00:22:07.767 "trtype": "$TEST_TRANSPORT", 00:22:07.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.767 "adrfam": "ipv4", 00:22:07.767 "trsvcid": "$NVMF_PORT", 00:22:07.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.767 "hdgst": ${hdgst:-false}, 00:22:07.767 "ddgst": ${ddgst:-false} 00:22:07.767 }, 00:22:07.767 "method": "bdev_nvme_attach_controller" 00:22:07.767 } 00:22:07.767 EOF 00:22:07.767 )") 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:07.767 { 00:22:07.767 "params": { 00:22:07.767 "name": "Nvme$subsystem", 00:22:07.767 "trtype": "$TEST_TRANSPORT", 00:22:07.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.767 "adrfam": "ipv4", 00:22:07.767 "trsvcid": "$NVMF_PORT", 00:22:07.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.767 "hdgst": ${hdgst:-false}, 00:22:07.767 "ddgst": ${ddgst:-false} 00:22:07.767 }, 00:22:07.767 "method": "bdev_nvme_attach_controller" 00:22:07.767 } 00:22:07.767 EOF 00:22:07.767 )") 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:07.767 { 00:22:07.767 "params": { 00:22:07.767 "name": "Nvme$subsystem", 00:22:07.767 "trtype": "$TEST_TRANSPORT", 00:22:07.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.767 "adrfam": "ipv4", 00:22:07.767 "trsvcid": "$NVMF_PORT", 00:22:07.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.767 "hdgst": ${hdgst:-false}, 00:22:07.767 "ddgst": ${ddgst:-false} 00:22:07.767 }, 00:22:07.767 "method": "bdev_nvme_attach_controller" 00:22:07.767 } 00:22:07.767 EOF 00:22:07.767 )") 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:07.767 { 00:22:07.767 "params": { 00:22:07.767 "name": "Nvme$subsystem", 00:22:07.767 "trtype": "$TEST_TRANSPORT", 00:22:07.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.767 "adrfam": "ipv4", 00:22:07.767 "trsvcid": "$NVMF_PORT", 00:22:07.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.767 "hdgst": ${hdgst:-false}, 00:22:07.767 "ddgst": ${ddgst:-false} 00:22:07.767 }, 00:22:07.767 "method": "bdev_nvme_attach_controller" 00:22:07.767 } 00:22:07.767 EOF 00:22:07.767 )") 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:07.767 [2024-12-05 14:12:13.864309] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:22:07.767 [2024-12-05 14:12:13.864363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2793822 ] 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:07.767 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:07.767 { 00:22:07.767 "params": { 00:22:07.767 "name": "Nvme$subsystem", 00:22:07.767 "trtype": "$TEST_TRANSPORT", 00:22:07.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.768 "adrfam": "ipv4", 00:22:07.768 "trsvcid": "$NVMF_PORT", 00:22:07.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.768 "hdgst": ${hdgst:-false}, 00:22:07.768 "ddgst": ${ddgst:-false} 00:22:07.768 }, 00:22:07.768 "method": "bdev_nvme_attach_controller" 00:22:07.768 } 00:22:07.768 EOF 00:22:07.768 )") 00:22:07.768 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:07.768 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:07.768 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:07.768 { 00:22:07.768 "params": { 00:22:07.768 "name": "Nvme$subsystem", 00:22:07.768 "trtype": "$TEST_TRANSPORT", 00:22:07.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.768 "adrfam": "ipv4", 00:22:07.768 "trsvcid": "$NVMF_PORT", 00:22:07.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.768 "hdgst": ${hdgst:-false}, 00:22:07.768 "ddgst": ${ddgst:-false} 00:22:07.768 }, 00:22:07.768 "method": "bdev_nvme_attach_controller" 00:22:07.768 } 00:22:07.768 EOF 00:22:07.768 )") 00:22:07.768 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:07.768 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:07.768 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:07.768 { 00:22:07.768 "params": { 00:22:07.768 "name": "Nvme$subsystem", 00:22:07.768 "trtype": "$TEST_TRANSPORT", 00:22:07.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.768 "adrfam": "ipv4", 00:22:07.768 "trsvcid": "$NVMF_PORT", 00:22:07.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.768 "hdgst": ${hdgst:-false}, 00:22:07.768 "ddgst": ${ddgst:-false} 00:22:07.768 }, 00:22:07.768 "method": "bdev_nvme_attach_controller" 00:22:07.768 } 00:22:07.768 EOF 00:22:07.768 )") 00:22:07.768 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:07.768 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:07.768 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:07.768 { 00:22:07.768 "params": { 00:22:07.768 "name": "Nvme$subsystem", 00:22:07.768 "trtype": "$TEST_TRANSPORT", 00:22:07.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.768 
"adrfam": "ipv4", 00:22:07.768 "trsvcid": "$NVMF_PORT", 00:22:07.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.768 "hdgst": ${hdgst:-false}, 00:22:07.768 "ddgst": ${ddgst:-false} 00:22:07.768 }, 00:22:07.768 "method": "bdev_nvme_attach_controller" 00:22:07.768 } 00:22:07.768 EOF 00:22:07.768 )") 00:22:07.768 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:07.768 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:22:07.768 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:07.768 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:07.768 "params": { 00:22:07.768 "name": "Nvme1", 00:22:07.768 "trtype": "tcp", 00:22:07.768 "traddr": "10.0.0.2", 00:22:07.768 "adrfam": "ipv4", 00:22:07.768 "trsvcid": "4420", 00:22:07.768 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:07.768 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:07.768 "hdgst": false, 00:22:07.768 "ddgst": false 00:22:07.768 }, 00:22:07.768 "method": "bdev_nvme_attach_controller" 00:22:07.768 },{ 00:22:07.768 "params": { 00:22:07.768 "name": "Nvme2", 00:22:07.768 "trtype": "tcp", 00:22:07.768 "traddr": "10.0.0.2", 00:22:07.768 "adrfam": "ipv4", 00:22:07.768 "trsvcid": "4420", 00:22:07.768 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:07.768 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:07.768 "hdgst": false, 00:22:07.768 "ddgst": false 00:22:07.768 }, 00:22:07.768 "method": "bdev_nvme_attach_controller" 00:22:07.768 },{ 00:22:07.768 "params": { 00:22:07.768 "name": "Nvme3", 00:22:07.768 "trtype": "tcp", 00:22:07.768 "traddr": "10.0.0.2", 00:22:07.768 "adrfam": "ipv4", 00:22:07.768 "trsvcid": "4420", 00:22:07.768 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:07.768 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:07.768 "hdgst": false, 00:22:07.768 "ddgst": false 00:22:07.768 }, 00:22:07.768 "method": "bdev_nvme_attach_controller" 00:22:07.768 },{ 00:22:07.768 "params": { 00:22:07.768 "name": "Nvme4", 00:22:07.768 "trtype": "tcp", 00:22:07.768 "traddr": "10.0.0.2", 00:22:07.768 "adrfam": "ipv4", 00:22:07.768 "trsvcid": "4420", 00:22:07.768 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:07.768 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:07.768 "hdgst": false, 00:22:07.768 "ddgst": false 00:22:07.768 }, 00:22:07.768 "method": "bdev_nvme_attach_controller" 00:22:07.768 },{ 00:22:07.768 "params": { 00:22:07.768 "name": "Nvme5", 00:22:07.768 "trtype": "tcp", 00:22:07.768 "traddr": "10.0.0.2", 00:22:07.768 "adrfam": "ipv4", 00:22:07.768 "trsvcid": "4420", 00:22:07.768 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:07.768 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:07.768 "hdgst": false, 00:22:07.768 "ddgst": false 00:22:07.768 }, 00:22:07.768 "method": "bdev_nvme_attach_controller" 00:22:07.768 },{ 00:22:07.768 "params": { 00:22:07.768 "name": "Nvme6", 00:22:07.768 "trtype": "tcp", 00:22:07.768 "traddr": "10.0.0.2", 00:22:07.768 "adrfam": "ipv4", 00:22:07.768 "trsvcid": "4420", 00:22:07.768 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:07.768 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:07.768 "hdgst": false, 00:22:07.768 "ddgst": false 00:22:07.768 }, 00:22:07.768 "method": "bdev_nvme_attach_controller" 00:22:07.768 },{ 00:22:07.768 "params": { 00:22:07.768 "name": "Nvme7", 00:22:07.768 "trtype": "tcp", 00:22:07.768 "traddr": "10.0.0.2", 
00:22:07.768 "adrfam": "ipv4", 00:22:07.768 "trsvcid": "4420", 00:22:07.768 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:07.768 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:07.768 "hdgst": false, 00:22:07.768 "ddgst": false 00:22:07.768 }, 00:22:07.768 "method": "bdev_nvme_attach_controller" 00:22:07.768 },{ 00:22:07.768 "params": { 00:22:07.768 "name": "Nvme8", 00:22:07.768 "trtype": "tcp", 00:22:07.768 "traddr": "10.0.0.2", 00:22:07.768 "adrfam": "ipv4", 00:22:07.768 "trsvcid": "4420", 00:22:07.768 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:07.768 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:07.768 "hdgst": false, 00:22:07.768 "ddgst": false 00:22:07.768 }, 00:22:07.768 "method": "bdev_nvme_attach_controller" 00:22:07.768 },{ 00:22:07.768 "params": { 00:22:07.768 "name": "Nvme9", 00:22:07.768 "trtype": "tcp", 00:22:07.768 "traddr": "10.0.0.2", 00:22:07.768 "adrfam": "ipv4", 00:22:07.768 "trsvcid": "4420", 00:22:07.768 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:07.768 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:07.768 "hdgst": false, 00:22:07.768 "ddgst": false 00:22:07.768 }, 00:22:07.768 "method": "bdev_nvme_attach_controller" 00:22:07.768 },{ 00:22:07.768 "params": { 00:22:07.768 "name": "Nvme10", 00:22:07.768 "trtype": "tcp", 00:22:07.768 "traddr": "10.0.0.2", 00:22:07.768 "adrfam": "ipv4", 00:22:07.768 "trsvcid": "4420", 00:22:07.768 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:07.768 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:07.768 "hdgst": false, 00:22:07.768 "ddgst": false 00:22:07.768 }, 00:22:07.768 "method": "bdev_nvme_attach_controller" 00:22:07.768 }' 00:22:07.768 [2024-12-05 14:12:13.953000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.768 [2024-12-05 14:12:13.989353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.680 Running I/O for 10 seconds... 
00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2793506 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2793506 ']' 
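The waitforio gate traced above is a bounded poll of bdevperf's iostat over its RPC socket: up to ten probes, succeeding once the bdev has served at least 100 reads (131 on this run). Roughly, assuming the repo's rpc_cmd wrapper (scripts/rpc.py underneath) is on PATH; the sleep interval is an assumption, since the trace does not show one:

# Succeed once the bdev has completed >= 100 reads, giving up after 10 probes.
waitforio() {
    local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25   # assumed probe interval, for illustration only
    done
    return $ret
}
waitforio /var/tmp/bdevperf.sock Nvme1n1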
00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2793506 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:10.255 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2793506 00:22:10.256 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:10.256 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:10.256 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2793506' 00:22:10.256 killing process with pid 2793506 00:22:10.256 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2793506 00:22:10.256 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2793506
00:22:10.256 [2024-12-05 14:12:16.524591] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24985f0 is same with the state(6) to be set
00:22:10.256 [2024-12-05 14:12:16.525867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249b070 is same with the state(6) to be set
00:22:10.257 [2024-12-05 14:12:16.527054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set
state(6) to be set 00:22:10.257 [2024-12-05 14:12:16.527247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.527251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.527256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.527261] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.527265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.527270] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.527274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.527279] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.527283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.527288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.527293] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.527298] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.527303] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.527308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.527312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.527317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.527322] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.527327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.527331] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.527336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.527340] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.527345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.527350] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.527355] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2498ae0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528543] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528558] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528563] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528619] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528624] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 
14:12:16.528629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528657] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528671] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528676] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528710] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528715] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same 
with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528750] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528773] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.258 [2024-12-05 14:12:16.528831] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.528835] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24994a0 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529271] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529305] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529322] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529332] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529341] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529365] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the 
state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529430] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529434] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529448] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529460] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529497] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529520] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529529] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529543] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529557] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.529581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2499820 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.530865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.259 [2024-12-05 
14:12:16.530889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.530894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.530900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.530905] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.530910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.530915] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.530920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.530925] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.530930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.530934] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.530939] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.530944] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.530949] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.530953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.530958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.530963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.530968] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.530972] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.530977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.259 [2024-12-05 14:12:16.530982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.530986] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.530991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same 
with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.530996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531016] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531026] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531036] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531046] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531050] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531055] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531074] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531107] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531122] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531146] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531151] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531166] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531175] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249a1c0 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.531993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the 
state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532039] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532053] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532062] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532067] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532086] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532099] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532109] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532114] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set 00:22:10.260 [2024-12-05 14:12:16.532133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
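Editor's note: the condensed *ERROR* bursts above come from SPDK's nvmf_tcp_qpair_set_recv_state() (tcp.c:1790), which is asked to move a qpair's PDU receive state to the state the qpair is already in; it logs the request and treats it as a no-op, so a qpair polled repeatedly while stuck in one state prints the same line over and over. Below is a minimal C sketch of that kind of guard; the type and state names (tcp_qpair, pdu_recv_state, RECV_STATE_QUIESCING for value 6) are illustrative assumptions, not the SPDK source.

    #include <stdio.h>

    /* Illustrative stand-ins for the SPDK internals; the guard pattern
     * itself is the point. Value 6 matches the "state(6)" printed in
     * the log above, but its name here is assumed. */
    enum pdu_recv_state {
        RECV_STATE_READY     = 0,
        RECV_STATE_QUIESCING = 6
    };

    struct tcp_qpair {
        enum pdu_recv_state recv_state;
    };

    /* Setting the state a qpair is already in is logged and treated as
     * a no-op rather than performing a redundant transition. */
    static void set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
    {
        if (tqpair->recv_state == state) {
            fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state;
    }

    int main(void)
    {
        struct tcp_qpair q = { .recv_state = RECV_STATE_QUIESCING };
        set_recv_state(&q, RECV_STATE_QUIESCING); /* emits the message once */
        return 0;
    }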
00:22:10.261 [2024-12-05 14:12:16.538653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.261 [2024-12-05 14:12:16.538687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.261 [2024-12-05 14:12:16.538705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.261 [2024-12-05 14:12:16.538714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.261 [2024-12-05 14:12:16.538723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.261 [2024-12-05 14:12:16.538736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.261 [2024-12-05 14:12:16.538746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.261 [2024-12-05 14:12:16.538753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
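Editor's note: each NOTICE pair above prints an in-flight I/O command and its completion; "(00/08)" is status code type 00h (Generic Command Status) with status code 08h, which the NVMe spec defines as Command Aborted due to SQ Deletion — expected here, since the submission queue is deleted while the I/Os are still queued. A self-contained C sketch of decoding that status half of a completion-queue entry (CQE dword 3, bits 31:16); the function and variable names are mine, not an SPDK API.

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the upper half of NVMe CQE dword 3: bit 16 is the phase
     * tag, bits 24:17 the status code (SC), bits 27:25 the status code
     * type (SCT), bit 30 "more", bit 31 "do not retry". */
    static void decode_cqe_status(uint16_t dw3_hi)
    {
        unsigned p   = dw3_hi & 0x1;          /* phase tag */
        unsigned sc  = (dw3_hi >> 1) & 0xff;  /* status code */
        unsigned sct = (dw3_hi >> 9) & 0x7;   /* status code type */
        unsigned m   = (dw3_hi >> 14) & 0x1;  /* more */
        unsigned dnr = (dw3_hi >> 15) & 0x1;  /* do not retry */

        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
        if (sct == 0x0 && sc == 0x08) {
            /* Generic Command Status 08h: aborted due to SQ deletion */
            printf("ABORTED - SQ DELETION\n");
        }
    }

    int main(void)
    {
        decode_cqe_status(0x08 << 1); /* SCT=0, SC=08h, p/m/dnr all 0, as logged above */
        return 0;
    }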
00:22:10.261 [2024-12-05 14:12:16.538763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.261 [2024-12-05 14:12:16.538770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching WRITE + ABORTED - SQ DELETION (00/08) pairs continue for cid:18 through cid:63, lba advancing by 128 from 26880 to 32640, through 2024-12-05 14:12:16.539547 ...]
00:22:10.262 [2024-12-05 14:12:16.539556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.262 [2024-12-05 14:12:16.539563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.262 [2024-12-05 14:12:16.539573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.262 [2024-12-05 14:12:16.539582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.262 [2024-12-05 14:12:16.539591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.262 [2024-12-05 14:12:16.539599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:22:10.262 [2024-12-05 14:12:16.539608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.262 [2024-12-05 14:12:16.539616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.262 [2024-12-05 14:12:16.539625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.262 [2024-12-05 14:12:16.539632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.262 [2024-12-05 14:12:16.539642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.262 [2024-12-05 14:12:16.539649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.262 [2024-12-05 14:12:16.539658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.262 [2024-12-05 14:12:16.539665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.262 [2024-12-05 14:12:16.539674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.262 [2024-12-05 14:12:16.539681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.262 [2024-12-05 14:12:16.539691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.262 [2024-12-05 14:12:16.539698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.262 [2024-12-05 14:12:16.539707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.262 [2024-12-05 14:12:16.539714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.262 [2024-12-05 14:12:16.539724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.262 [2024-12-05 14:12:16.539731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.262 [2024-12-05 14:12:16.539740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.262 [2024-12-05 14:12:16.539747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.262 [2024-12-05 14:12:16.539756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.262 [2024-12-05 14:12:16.539764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.262 [2024-12-05 
14:12:16.539794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:10.262 [2024-12-05 14:12:16.540226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.262 [2024-12-05 14:12:16.540251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.262 [2024-12-05 14:12:16.540263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.262 [2024-12-05 14:12:16.540270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.262 [2024-12-05 14:12:16.540280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.262 [2024-12-05 14:12:16.540287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.262 [2024-12-05 14:12:16.540296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.262 [2024-12-05 14:12:16.540304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.262 [2024-12-05 14:12:16.540314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.262 [2024-12-05 14:12:16.540321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.262 [2024-12-05 14:12:16.540330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.262 [2024-12-05 14:12:16.540337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.262 [2024-12-05 14:12:16.540347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.262 [2024-12-05 14:12:16.540354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.262 [2024-12-05 14:12:16.540364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.262 [2024-12-05 14:12:16.540371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.262 [2024-12-05 14:12:16.540380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.262 [2024-12-05 14:12:16.540388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.262 [2024-12-05 14:12:16.540397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
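The *ERROR* line above is the public-API symptom of the disconnect: spdk_nvme_qpair_process_completions() returns a negative errno once a qpair's transport has failed, and -6 is ENXIO ("No such device or address"). A minimal sketch of a host polling loop that surfaces exactly this condition; the helper name is hypothetical and the qpair is assumed to be already connected:

    #include <errno.h>
    #include "spdk/nvme.h"

    /* Hypothetical helper: poll one I/O qpair and report whether it is still usable. */
    static int
    poll_io_qpair(struct spdk_nvme_qpair *qpair)
    {
        /* spdk_nvme_qpair_process_completions() returns the number of completions
         * reaped, or a negative errno once the qpair has failed; -ENXIO (-6) is
         * what produces the "CQ transport error -6" line above when the TCP
         * connection to the target goes away. */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* 0 = no limit */);

        if (rc == -ENXIO) {
            /* Outstanding commands are failed back with
             * ABORTED - SQ DELETION (00/08), as printed throughout this log. */
            return -1;
        }
        return rc < 0 ? -1 : 0;
    }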
00:22:10.262 [... 14:12:16.540226–14:12:16.541326: the same abort pattern for the next qpair: WRITE sqid:1 cid:38–63 (lba 29440–32640, len:128) and READ sqid:1 cid:0–37 (lba 24576–29312, len:128), SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; every command completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:22:10.264 [2024-12-05 14:12:16.541349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:10.264 [... 14:12:16.541528–14:12:16.541598: admin queue teardown: ASYNC EVENT REQUEST (0c) qid:0 cid:0–3 each completed ABORTED - SQ DELETION (00/08); then nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9bcc0 is same with the state(6) to be set ...]
00:22:10.264 [... 14:12:16.541622–14:12:16.541684: the same ASYNC EVENT REQUEST (0c) qid:0 cid:0–3 aborts for the next controller, ending with nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe99fc0 is same with the state(6) to be set ...]
00:22:10.264 [2024-12-05 14:12:16.541707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:10.264 [2024-12-05 14:12:16.541824] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ab80 is same with the state(6) to be set
[... identical tcp.c:1790 *ERROR* line repeated 22 more times, 14:12:16.541843–14:12:16.541949 ...]
00:22:10.532 [... 14:12:16.551742–14:12:16.551840: ASYNC EVENT REQUEST (0c) qid:0 cid:0–3 each completed ABORTED - SQ DELETION (00/08); then nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb3610 is same with the state(6) to be set ...]
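The "(00/08)" printed with each aborted completion is the NVMe (status code type / status code) pair: SCT 0x0 is the generic command status set and SC 0x08 is "Command Aborted due to SQ Deletion", which is how outstanding commands are failed back when a submission queue goes away. A minimal sketch of testing for that status with SPDK's public spec definitions; the helper name is hypothetical, not part of this test:

    #include <stdbool.h>
    #include "spdk/nvme_spec.h"

    /* Hypothetical helper: true if a completion carries the status printed as
     * "(00/08)" above, i.e. SCT 0x0 (generic) / SC 0x08 (aborted, SQ deleted). */
    static bool
    cpl_is_sq_deletion_abort(const struct spdk_nvme_cpl *cpl)
    {
        return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
               cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
    }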
00:22:10.532 [... 14:12:16.551913–14:12:16.552512: the same admin-queue abort sequence, ASYNC EVENT REQUEST (0c) qid:0 cid:0–3 completed ABORTED - SQ DELETION (00/08) followed by nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=... is same with the state(6) to be set, once per remaining controller: tqpair 0x12f3fa0, 0x1303c00, 0x12f3810, 0x12c62b0, 0x12c0920, 0xe91190, 0xe9b850 ...]
00:22:10.533 [... 14:12:16.552661–14:12:16.553342: next I/O qpair abort dump: WRITE sqid:1 cid:0–39 (lba 24576–29568, len:128), SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; every command completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:22:10.534 [2024-12-05 14:12:16.553351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.534 [2024-12-05 14:12:16.553358] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.534 [2024-12-05 14:12:16.553367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.534 [2024-12-05 14:12:16.553374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.534 [2024-12-05 14:12:16.553384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.534 [2024-12-05 14:12:16.553391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.534 [2024-12-05 14:12:16.553400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.534 [2024-12-05 14:12:16.553409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.534 [2024-12-05 14:12:16.553418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.534 [2024-12-05 14:12:16.553425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.534 [2024-12-05 14:12:16.553435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.534 [2024-12-05 14:12:16.553443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.534 [2024-12-05 14:12:16.553452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.534 [2024-12-05 14:12:16.553468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.534 [2024-12-05 14:12:16.553477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.534 [2024-12-05 14:12:16.553484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.534 [2024-12-05 14:12:16.553494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.534 [2024-12-05 14:12:16.553501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.534 [2024-12-05 14:12:16.553510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.534 [2024-12-05 14:12:16.553518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.534 [2024-12-05 14:12:16.553527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.534 [2024-12-05 14:12:16.553534] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.534 [2024-12-05 14:12:16.553544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.534 [2024-12-05 14:12:16.553551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.534 [2024-12-05 14:12:16.553560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.534 [2024-12-05 14:12:16.553568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.534 [2024-12-05 14:12:16.553577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.534 [2024-12-05 14:12:16.553584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.534 [2024-12-05 14:12:16.553594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.534 [2024-12-05 14:12:16.553601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.534 [2024-12-05 14:12:16.553610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.534 [2024-12-05 14:12:16.553617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.534 [2024-12-05 14:12:16.553629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.534 [2024-12-05 14:12:16.553636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.534 [2024-12-05 14:12:16.553646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.534 [2024-12-05 14:12:16.553653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.534 [2024-12-05 14:12:16.553663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.534 [2024-12-05 14:12:16.553670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.534 [2024-12-05 14:12:16.553679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.534 [2024-12-05 14:12:16.553687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.534 [2024-12-05 14:12:16.553696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.534 [2024-12-05 14:12:16.553704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.534 [2024-12-05 14:12:16.553713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.534 [2024-12-05 14:12:16.553720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.534 [2024-12-05 14:12:16.553730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.534 [2024-12-05 14:12:16.553739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.534 [2024-12-05 14:12:16.553749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.534 [2024-12-05 14:12:16.553756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.534 [2024-12-05 14:12:16.556431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:10.534 [2024-12-05 14:12:16.556504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c62b0 (9): Bad file descriptor 00:22:10.534 [2024-12-05 14:12:16.556539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9bcc0 (9): Bad file descriptor 00:22:10.534 [2024-12-05 14:12:16.556558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe99fc0 (9): Bad file descriptor 00:22:10.534 [2024-12-05 14:12:16.556572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb3610 (9): Bad file descriptor 00:22:10.534 [2024-12-05 14:12:16.556587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f3fa0 (9): Bad file descriptor 00:22:10.534 [2024-12-05 14:12:16.556601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1303c00 (9): Bad file descriptor 00:22:10.534 [2024-12-05 14:12:16.556615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f3810 (9): Bad file descriptor 00:22:10.534 [2024-12-05 14:12:16.556629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c0920 (9): Bad file descriptor 00:22:10.534 [2024-12-05 14:12:16.556646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe91190 (9): Bad file descriptor 00:22:10.534 [2024-12-05 14:12:16.556667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9b850 (9): Bad file descriptor 00:22:10.534 [2024-12-05 14:12:16.558164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:10.534 [2024-12-05 14:12:16.558191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:10.534 [2024-12-05 14:12:16.559948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:10.534 [2024-12-05 14:12:16.559973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c62b0 with addr=10.0.0.2, port=4420 00:22:10.534 [2024-12-05 14:12:16.559982] nvme_tcp.c: 
00:22:10.534 [2024-12-05 14:12:16.560312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:10.534 [2024-12-05 14:12:16.560323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f3fa0 with addr=10.0.0.2, port=4420
00:22:10.534 [2024-12-05 14:12:16.560331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f3fa0 is same with the state(6) to be set
00:22:10.534 [2024-12-05 14:12:16.560746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:10.534 [2024-12-05 14:12:16.560785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe99fc0 with addr=10.0.0.2, port=4420
00:22:10.534 [2024-12-05 14:12:16.560796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe99fc0 is same with the state(6) to be set
00:22:10.534 [2024-12-05 14:12:16.561201] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:10.534 [2024-12-05 14:12:16.561333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.534 [2024-12-05 14:12:16.561350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.534 [2024-12-05 14:12:16.561368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.534 [2024-12-05 14:12:16.561376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.534 [2024-12-05 14:12:16.561386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.534 [2024-12-05 14:12:16.561395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.534 [2024-12-05 14:12:16.561405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.534 [2024-12-05 14:12:16.561412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.534 [2024-12-05 14:12:16.561422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.534 [2024-12-05 14:12:16.561429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.534 [2024-12-05 14:12:16.561439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.534 [2024-12-05 14:12:16.561446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.534 [2024-12-05 14:12:16.561463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.534 [2024-12-05 14:12:16.561471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.534 [2024-12-05 14:12:16.561480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129e0e0 is same with the state(6) to be set
00:22:10.535 [2024-12-05 14:12:16.561636] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:10.535 [2024-12-05 14:12:16.561666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c62b0 (9): Bad file descriptor
00:22:10.535 [2024-12-05 14:12:16.561677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f3fa0 (9): Bad file descriptor
00:22:10.535 [2024-12-05 14:12:16.561687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe99fc0 (9): Bad file descriptor
00:22:10.535 [2024-12-05 14:12:16.561745] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:10.535 [2024-12-05 14:12:16.561783] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:10.535 [2024-12-05 14:12:16.562777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.535 [2024-12-05 14:12:16.562793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.535 [2024-12-05 14:12:16.562808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.535 [2024-12-05 14:12:16.562817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.535 [2024-12-05 14:12:16.562828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.535 [2024-12-05 14:12:16.562837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.535 [2024-12-05 14:12:16.562848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.535 [2024-12-05 14:12:16.562857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.535 [2024-12-05 14:12:16.562867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.535 [2024-12-05 14:12:16.562876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.535 [2024-12-05 14:12:16.562888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.535 [2024-12-05 14:12:16.562897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.535 [2024-12-05 14:12:16.562908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.535 [2024-12-05 14:12:16.562916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ command / ABORTED - SQ DELETION completion pair repeats for cid:7 through cid:62, lba 25472 through 32512 advancing 128 blocks per command ...]
00:22:10.536 [2024-12-05 14:12:16.563901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.536 [2024-12-05 14:12:16.563908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.536 [2024-12-05 14:12:16.563917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a05a0 is same with the state(6) to be set
00:22:10.536 [2024-12-05 14:12:16.564004] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:10.536 [2024-12-05 14:12:16.564031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:22:10.536 [2024-12-05 14:12:16.564064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:22:10.536 [2024-12-05 14:12:16.564073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:22:10.536 [2024-12-05 14:12:16.564083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:22:10.536 [2024-12-05 14:12:16.564091] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:22:10.536 [2024-12-05 14:12:16.564099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:22:10.536 [2024-12-05 14:12:16.564106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:22:10.536 [2024-12-05 14:12:16.564113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:22:10.536 [2024-12-05 14:12:16.564119] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:22:10.536 [2024-12-05 14:12:16.564127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:22:10.536 [2024-12-05 14:12:16.564133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:22:10.536 [2024-12-05 14:12:16.564140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:22:10.536 [2024-12-05 14:12:16.564146] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:22:10.536 [2024-12-05 14:12:16.565413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:22:10.536 [2024-12-05 14:12:16.565766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:10.536 [2024-12-05 14:12:16.565782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c0920 with addr=10.0.0.2, port=4420
00:22:10.536 [2024-12-05 14:12:16.565791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c0920 is same with the state(6) to be set
00:22:10.536 [2024-12-05 14:12:16.566350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:10.536 [2024-12-05 14:12:16.566364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb3610 with addr=10.0.0.2, port=4420
00:22:10.536 [2024-12-05 14:12:16.566376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb3610 is same with the state(6) to be set
00:22:10.536 [2024-12-05 14:12:16.566385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c0920 (9): Bad file descriptor
00:22:10.536 [2024-12-05 14:12:16.566699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb3610 (9): Bad file descriptor
00:22:10.536 [2024-12-05 14:12:16.566712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:22:10.536 [2024-12-05 14:12:16.566719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:22:10.536 [2024-12-05 14:12:16.566726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:22:10.536 [2024-12-05 14:12:16.566733] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:22:10.536 [2024-12-05 14:12:16.566838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:22:10.536 [2024-12-05 14:12:16.566847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:22:10.536 [2024-12-05 14:12:16.566854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:22:10.536 [2024-12-05 14:12:16.566860] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:22:10.536 [2024-12-05 14:12:16.566893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.536 [2024-12-05 14:12:16.566902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ command / ABORTED - SQ DELETION completion pair repeats for cid:1 through cid:46, lba 16512 through 22272 advancing 128 blocks per command ...]
00:22:10.537 [2024-12-05 14:12:16.567706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.537 [2024-12-05 14:12:16.567714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:10.537 [2024-12-05 14:12:16.567723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.537 [2024-12-05
14:12:16.567731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.567740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.567747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.567757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.567764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.567774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.567783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.567794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.567802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.567812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.567820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.567829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.567836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.567846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.567853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.567863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.567875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.567886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.567896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.567906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.567913] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.567923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.567930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.567939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.567947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.567956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.567963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.567973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.567980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.567989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.567996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.568005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109fc10 is same with the state(6) to be set 00:22:10.537 [2024-12-05 14:12:16.569287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.569301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.569314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.569321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.569331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.569339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.569349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.569356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.569365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.569373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.569383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.569390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.569403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.569410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.569420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.569428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.569437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.569445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.537 [2024-12-05 14:12:16.569458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.537 [2024-12-05 14:12:16.569466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569719] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569891] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.569992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.569999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.570009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.570016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.570026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.570033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.570043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.570052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.570062] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.570069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.570079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.570086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.570096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.570103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.570113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.570120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.570129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.570137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.570146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.570154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.570163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.570171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.570180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.570187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.570197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.570205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.570215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.570222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.570232] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.570240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.570249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.570257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.570268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.570275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.570284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.570292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.570302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.570309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.570319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.570327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.570337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.570344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.570354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.570361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.570371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.570378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.570388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.570396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.570404] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a0bc0 is same with the state(6) to be set 00:22:10.538 [2024-12-05 14:12:16.571684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.571698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.571710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.571719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.538 [2024-12-05 14:12:16.571731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.538 [2024-12-05 14:12:16.571740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.571751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.571760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.571773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.571782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.571793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.571802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.571813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.571822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.571833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.571841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.571851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.571858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.571867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.571875] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.571884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.571891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.571901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.571909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.571918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.571925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.571935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.571943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.571952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.571960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.571969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.571977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.571987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.571996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:10.539 [2024-12-05 14:12:16.572561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.539 [2024-12-05 14:12:16.572568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:10.539 [2024-12-05 14:12:16.572578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.539 [2024-12-05 14:12:16.572587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeat for cid:51 through cid:63, lba:22912 through lba:24448 ...]
00:22:10.540 [2024-12-05 14:12:16.572828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129cee0 is same with the state(6) to be set
00:22:10.540 [2024-12-05 14:12:16.574102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.540 [2024-12-05 14:12:16.574118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeat for cid:1 through cid:63, lba:16512 through lba:24448 ...]
00:22:10.541 [2024-12-05 14:12:16.575285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a2b20 is same with the state(6) to be set
00:22:10.541 [2024-12-05 14:12:16.576551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:10.541 [2024-12-05 14:12:16.576570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs repeat for cid:1 through cid:63, lba:16512 through lba:24448 ...]
00:22:10.542 [2024-12-05 14:12:16.577717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10dc470 is same with the state(6) to be set
00:22:10.542 [2024-12-05 14:12:16.579242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:10.542 [2024-12-05 14:12:16.579269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:10.542 [2024-12-05 14:12:16.579282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:22:10.542 [2024-12-05 14:12:16.579295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:22:10.542 [2024-12-05 14:12:16.579381] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:22:10.542 task offset: 26240 on job bdev=Nvme6n1 fails
00:22:10.542
00:22:10.542 Latency(us)
00:22:10.542 [2024-12-05T13:12:16.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:10.542 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:10.542 Job: Nvme1n1 ended in about 0.90 seconds with error
00:22:10.542 Verification LBA range: start 0x0 length 0x400
00:22:10.542 Nvme1n1 : 0.90 141.93 8.87 70.96 0.00 297135.50 23702.19 246415.36
00:22:10.542 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:10.542 Job: Nvme2n1 ended in about 0.90 seconds with error
00:22:10.542 Verification LBA range: start 0x0 length 0x400
00:22:10.542 Nvme2n1 : 0.90 141.55 8.85 70.78 0.00 291612.73 15947.09 274377.39
00:22:10.542 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:10.542 Job: Nvme3n1 ended in about 0.89 seconds with error
00:22:10.542 Verification LBA range: start 0x0 length 0x400
00:22:10.542 Nvme3n1 : 0.89 215.59 13.47 71.86 0.00 210470.40 18677.76 246415.36
00:22:10.542 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:10.542 Job: Nvme4n1 ended in about 0.91 seconds with error
00:22:10.542 Verification LBA range: start 0x0 length 0x400
00:22:10.542 Nvme4n1 : 0.91 141.18 8.82 70.59 0.00 279812.55 19660.80 244667.73
00:22:10.542 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:10.542 Job: Nvme5n1 ended in about 0.90 seconds with error
00:22:10.542 Verification LBA range: start 0x0 length 0x400
00:22:10.542 Nvme5n1 : 0.90 212.19 13.26 7.82 0.00 261954.38 40850.77 265639.25
00:22:10.542 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:10.542 Job: Nvme6n1 ended in about 0.89 seconds with error
00:22:10.542 Verification LBA range: start 0x0 length 0x400
00:22:10.542 Nvme6n1 : 0.89 216.27 13.52 72.09 0.00 195613.65 15510.19 225443.84
00:22:10.542 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:10.542 Job: Nvme7n1 ended in about 0.90 seconds with error
00:22:10.542 Verification LBA range: start 0x0 length 0x400
00:22:10.542 Nvme7n1 : 0.90 213.79 13.36 71.26 0.00 193422.72 12888.75 249910.61
00:22:10.542 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:10.542 Job: Nvme8n1 ended in about 0.89 seconds with error
00:22:10.542 Verification LBA range: start 0x0 length 0x400
00:22:10.542 Nvme8n1 : 0.89 215.97 13.50 71.99 0.00 186418.13 15182.51 249910.61
00:22:10.542 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:10.542 Job: Nvme9n1 ended in about 0.91 seconds with error
00:22:10.542 Verification LBA range: start 0x0 length 0x400
00:22:10.542 Nvme9n1 : 0.91 140.79 8.80 70.40 0.00 249017.46 18677.76 246415.36
00:22:10.542 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:10.542 Job: Nvme10n1 ended in about 0.91 seconds with error
00:22:10.542 Verification LBA range: start 0x0 length 0x400
00:22:10.542 Nvme10n1 : 0.91 140.42 8.78 70.21 0.00 243434.95 14854.83 235929.60
00:22:10.543 [2024-12-05T13:12:16.843Z] ===================================================================================================================
00:22:10.543 [2024-12-05T13:12:16.843Z] Total : 1779.69 111.23 647.96 0.00 235725.04 12888.75 274377.39
00:22:10.543 [2024-12-05 14:12:16.605529] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:10.543 [2024-12-05 14:12:16.605580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:22:10.543 [2024-12-05 14:12:16.606025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:10.543 [2024-12-05 14:12:16.606044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9bcc0 with addr=10.0.0.2, port=4420
00:22:10.543 [2024-12-05 14:12:16.606055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9bcc0 is same with the state(6) to be set
[... identical connect() failed / sock connection error / recv state triplets repeat for tqpair=0xe9b850, tqpair=0xe91190 and tqpair=0x12f3810 ...]
00:22:10.543 [2024-12-05 14:12:16.608457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:22:10.543 [2024-12-05 14:12:16.608472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:22:10.543 [2024-12-05 14:12:16.608482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:22:10.543 [2024-12-05 14:12:16.608491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:22:10.543 [2024-12-05 14:12:16.608502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:22:10.543 [2024-12-05 14:12:16.608898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:10.543 [2024-12-05 14:12:16.608912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1303c00 with addr=10.0.0.2, port=4420
00:22:10.543 [2024-12-05 14:12:16.608919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303c00 is same with the state(6) to be set
00:22:10.543 [2024-12-05 14:12:16.608932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9bcc0 (9): Bad file descriptor
00:22:10.543 [2024-12-05 14:12:16.608945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9b850 (9): Bad file descriptor
00:22:10.543 [2024-12-05 14:12:16.608954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe91190 (9): Bad file descriptor
00:22:10.543 [2024-12-05 14:12:16.608964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f3810 (9): Bad file descriptor
00:22:10.543 [2024-12-05 14:12:16.609001] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:22:10.543 [2024-12-05 14:12:16.609018] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:22:10.543 [2024-12-05 14:12:16.609029] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:22:10.543 [2024-12-05 14:12:16.609039] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:22:10.543 [2024-12-05 14:12:16.609428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:10.543 [2024-12-05 14:12:16.609444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe99fc0 with addr=10.0.0.2, port=4420 00:22:10.543 [2024-12-05 14:12:16.609452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe99fc0 is same with the state(6) to be set 00:22:10.543 [2024-12-05 14:12:16.609776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:10.543 [2024-12-05 14:12:16.609788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f3fa0 with addr=10.0.0.2, port=4420 00:22:10.543 [2024-12-05 14:12:16.609795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f3fa0 is same with the state(6) to be set 00:22:10.543 [2024-12-05 14:12:16.610117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:10.543 [2024-12-05 14:12:16.610127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c62b0 with addr=10.0.0.2, port=4420 00:22:10.543 [2024-12-05 14:12:16.610134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c62b0 is same with the state(6) to be set 00:22:10.543 [2024-12-05 14:12:16.610353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:10.543 [2024-12-05 14:12:16.610363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c0920 with addr=10.0.0.2, port=4420 00:22:10.543 [2024-12-05 14:12:16.610371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c0920 is same with the state(6) to be set 00:22:10.543 [2024-12-05 14:12:16.610710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:10.543 [2024-12-05 14:12:16.610720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb3610 with addr=10.0.0.2, port=4420 00:22:10.543 [2024-12-05 14:12:16.610727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb3610 is same with the state(6) to be set 00:22:10.543 [2024-12-05 14:12:16.610737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1303c00 (9): Bad file descriptor 00:22:10.543 [2024-12-05 14:12:16.610747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:10.543 [2024-12-05 14:12:16.610754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:10.543 [2024-12-05 14:12:16.610763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:10.543 [2024-12-05 14:12:16.610772] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:10.543 [2024-12-05 14:12:16.610781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:10.543 [2024-12-05 14:12:16.610787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:10.543 [2024-12-05 14:12:16.610793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:22:10.543 [2024-12-05 14:12:16.610800] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:10.543 [2024-12-05 14:12:16.610807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:10.543 [2024-12-05 14:12:16.610817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:10.543 [2024-12-05 14:12:16.610824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:10.543 [2024-12-05 14:12:16.610830] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:10.543 [2024-12-05 14:12:16.610838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:10.543 [2024-12-05 14:12:16.610845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:10.543 [2024-12-05 14:12:16.610851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:10.543 [2024-12-05 14:12:16.610858] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:22:10.543 [2024-12-05 14:12:16.610934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe99fc0 (9): Bad file descriptor 00:22:10.543 [2024-12-05 14:12:16.610946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f3fa0 (9): Bad file descriptor 00:22:10.543 [2024-12-05 14:12:16.610955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c62b0 (9): Bad file descriptor 00:22:10.543 [2024-12-05 14:12:16.610964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c0920 (9): Bad file descriptor 00:22:10.543 [2024-12-05 14:12:16.610973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb3610 (9): Bad file descriptor 00:22:10.543 [2024-12-05 14:12:16.610982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:10.543 [2024-12-05 14:12:16.610988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:10.543 [2024-12-05 14:12:16.610995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:10.543 [2024-12-05 14:12:16.611001] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:22:10.543 [2024-12-05 14:12:16.611027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:10.543 [2024-12-05 14:12:16.611034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:10.543 [2024-12-05 14:12:16.611041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:10.543 [2024-12-05 14:12:16.611047] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:22:10.543 [2024-12-05 14:12:16.611054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:10.543 [2024-12-05 14:12:16.611061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:10.543 [2024-12-05 14:12:16.611068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:10.543 [2024-12-05 14:12:16.611074] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:10.543 [2024-12-05 14:12:16.611081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:10.543 [2024-12-05 14:12:16.611087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:10.543 [2024-12-05 14:12:16.611094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:10.543 [2024-12-05 14:12:16.611101] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:10.543 [2024-12-05 14:12:16.611108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:10.543 [2024-12-05 14:12:16.611117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:10.543 [2024-12-05 14:12:16.611123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:10.543 [2024-12-05 14:12:16.611130] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:10.543 [2024-12-05 14:12:16.611137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:10.543 [2024-12-05 14:12:16.611143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:10.543 [2024-12-05 14:12:16.611150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:10.543 [2024-12-05 14:12:16.611157] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
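The errno = 111 repeated above is ECONNREFUSED: at this point in tc3 the target side is being torn down, so every reconnect attempt from the initiator-side bdev_nvme layer is refused and each controller (cnode1 through cnode10) is marked failed once spdk_nvme_ctrlr_reconnect_poll_async gives up. A quick way to confirm the same condition from a shell is a TCP probe against the listener address from this log (10.0.0.2:4420); this is an illustrative check, not part of the test:

  # Probe the NVMe/TCP listener port. With the target gone, the connect is
  # refused (errno 111), matching the posix_sock_create errors above.
  # Uses bash's /dev/tcp redirection; address and port are the log's values.
  if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo 'connect() to 10.0.0.2:4420 refused -- no listener'
  fi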
00:22:10.543 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2793822 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2793822 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2793822 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:11.926 rmmod nvme_tcp 00:22:11.926 
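The es= sequence above is the harness's expected-failure helper at work: 'NOT wait 2793822' waits on the background I/O process started earlier in tc3, gets status 255, and autotest_common.sh folds that to 127 (signal-range statuses) and then to the generic failure code 1, so the wrapper reports success precisely because the command really did fail. Stripped of the harness plumbing, the pattern is roughly this (a simplified sketch, not the literal autotest_common.sh code):

  # Expected-failure wrapper: succeed only if the wrapped command fails.
  NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=127   # fold signal-style exit statuses
    (( es != 0 ))              # exit 0 (pass) iff the command failed
  }
  NOT wait 2793822             # passes here: the waited-on process is already gone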
rmmod nvme_fabrics 00:22:11.926 rmmod nvme_keyring 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2793506 ']' 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2793506 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2793506 ']' 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2793506 00:22:11.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2793506) - No such process 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2793506 is not found' 00:22:11.926 Process with pid 2793506 is not found 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:11.926 14:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.943 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:13.943 00:22:13.943 real 0m7.959s 00:22:13.943 user 0m19.857s 00:22:13.943 sys 0m1.272s 00:22:13.943 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:13.943 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:13.943 ************************************ 00:22:13.943 END TEST nvmf_shutdown_tc3 00:22:13.943 ************************************ 00:22:13.943 14:12:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:13.943 ************************************ 00:22:13.943 START TEST nvmf_shutdown_tc4 00:22:13.943 ************************************ 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:13.943 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:13.943 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:13.944 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.944 14:12:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:13.944 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:13.944 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:13.944 14:12:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:13.944 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:14.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:14.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:22:14.205 00:22:14.205 --- 10.0.0.2 ping statistics --- 00:22:14.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.205 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:14.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:14.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:22:14.205 00:22:14.205 --- 10.0.0.1 ping statistics --- 00:22:14.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.205 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2795277 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2795277 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2795277 ']' 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
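The two pings close out nvmftestinit's network bring-up: one port of the E810 NIC (cvl_0_0) has been moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), while its sibling port (cvl_0_1) stays in the default namespace as the initiator (10.0.0.1), so test traffic crosses a real link on a single host. Collected from the commands logged above, the wiring is equivalent to the following (run as root; device names and addresses are the ones in this log):

  # Isolate the target-side port in its own namespace and address both ends.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Admit NVMe/TCP traffic on the initiator side, then verify both directions.
  # (The harness additionally tags this rule with an SPDK_NVMF comment so that
  # the iptables-save | grep -v SPDK_NVMF | iptables-restore cleanup seen in
  # nvmftestfini above can strip it again.)
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1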
00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:14.205 14:12:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:14.205 [2024-12-05 14:12:20.490952] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:22:14.205 [2024-12-05 14:12:20.491051] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.466 [2024-12-05 14:12:20.589012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:14.466 [2024-12-05 14:12:20.622778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.466 [2024-12-05 14:12:20.622813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.466 [2024-12-05 14:12:20.622818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.466 [2024-12-05 14:12:20.622823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.466 [2024-12-05 14:12:20.622827] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:14.466 [2024-12-05 14:12:20.624175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.466 [2024-12-05 14:12:20.624334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:14.466 [2024-12-05 14:12:20.624491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:14.466 [2024-12-05 14:12:20.624492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:15.037 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:15.037 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:22:15.037 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:15.037 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:15.037 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:15.037 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.037 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:15.037 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.037 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:15.299 [2024-12-05 14:12:21.340170] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:15.299 14:12:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.299 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:15.299 Malloc1 
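Each 'cat' in the loop above appends one subsystem's worth of RPC calls to rpcs.txt, and replaying that batch against the target is what creates the Malloc1 through Malloc10 bdevs echoed around this point in the log. For a single subsystem, the batch boils down to something like the following rpc.py sequence (an illustrative sketch of the usual SPDK RPC calls, not the file's literal contents; the bdev size, serial number, and NQN here are placeholders):

  # Create the TCP transport once, then one malloc-backed NVMe-oF subsystem.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1   # 64 MB bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420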
00:22:15.299 [2024-12-05 14:12:21.448258] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.299 Malloc2 00:22:15.299 Malloc3 00:22:15.299 Malloc4 00:22:15.299 Malloc5 00:22:15.560 Malloc6 00:22:15.560 Malloc7 00:22:15.560 Malloc8 00:22:15.560 Malloc9 00:22:15.560 Malloc10 00:22:15.560 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.560 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:15.560 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:15.560 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:15.560 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2795664 00:22:15.560 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:15.560 14:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:15.822 [2024-12-05 14:12:21.928309] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:21.115 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:21.115 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2795277 00:22:21.115 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2795277 ']' 00:22:21.115 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2795277 00:22:21.115 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:22:21.115 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.115 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2795277 00:22:21.115 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:21.115 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:21.115 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2795277' 00:22:21.115 killing process with pid 2795277 00:22:21.115 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2795277 00:22:21.115 14:12:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2795277 00:22:21.115 [2024-12-05 14:12:26.924646] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e6a60 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.924691] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e6a60 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.924698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e6a60 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.924703] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e6a60 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.925168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e6f30 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.925194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e6f30 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.925200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e6f30 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.925205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e6f30 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.925632] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e7400 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.925656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e7400 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.925662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e7400 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.925668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e7400 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.925955] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e6590 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.925982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e6590 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.925989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e6590 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.925994] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e6590 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.925998] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e6590 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.929511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e7da0 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.929529] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e7da0 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.929534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e7da0 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.929540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e7da0 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.929545] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e7da0 is same with the 
state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.929550] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e7da0 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.929555] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e7da0 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.929560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e7da0 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.929830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170edd0 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.929846] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170edd0 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.929852] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170edd0 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.929857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170edd0 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.929862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170edd0 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.929867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170edd0 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.929872] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170edd0 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.929878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170edd0 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.930070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f2c0 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.930088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f2c0 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.930093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f2c0 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.930098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f2c0 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.930379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e78d0 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.930398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e78d0 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.930408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e78d0 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.930413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e78d0 is same with the state(6) to be set 00:22:21.115 [2024-12-05 14:12:26.931893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170fb10 is same with the state(6) to be set 00:22:21.116 [2024-12-05 14:12:26.931909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170fb10 is same with the state(6) to be set 00:22:21.116 [2024-12-05 14:12:26.931914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x170fb10 is same with the state(6) to be set 00:22:21.116 [2024-12-05 14:12:26.931920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170fb10 is same with the state(6) to be set 00:22:21.116 [2024-12-05 14:12:26.931924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170fb10 is same with the state(6) to be set 00:22:21.116 [2024-12-05 14:12:26.932223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170ffe0 is same with the state(6) to be set 00:22:21.116 [2024-12-05 14:12:26.932239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170ffe0 is same with the state(6) to be set 00:22:21.116 [2024-12-05 14:12:26.932245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170ffe0 is same with the state(6) to be set 00:22:21.116 [2024-12-05 14:12:26.932250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170ffe0 is same with the state(6) to be set 00:22:21.116 [2024-12-05 14:12:26.932493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17104b0 is same with the state(6) to be set 00:22:21.116 [2024-12-05 14:12:26.932510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17104b0 is same with the state(6) to be set 00:22:21.116 [2024-12-05 14:12:26.932515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17104b0 is same with the state(6) to be set 00:22:21.116 [2024-12-05 14:12:26.932520] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17104b0 is same with the state(6) to be set 00:22:21.116 [2024-12-05 14:12:26.932525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17104b0 is same with the state(6) to be set 00:22:21.116 [2024-12-05 14:12:26.932530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17104b0 is same with the state(6) to be set 00:22:21.116 [2024-12-05 14:12:26.932535] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17104b0 is same with the state(6) to be set 00:22:21.116 [2024-12-05 14:12:26.932850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f640 is same with the state(6) to be set 00:22:21.116 [2024-12-05 14:12:26.932866] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f640 is same with the state(6) to be set 00:22:21.116 [2024-12-05 14:12:26.932875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170f640 is same with the state(6) to be set 00:22:21.116 [2024-12-05 14:12:26.933325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1712190 is same with the state(6) to be set 00:22:21.116 [2024-12-05 14:12:26.933340] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1712190 is same with the state(6) to be set 00:22:21.116 [2024-12-05 14:12:26.933590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1712660 is same with the state(6) to be set 00:22:21.116 [2024-12-05 14:12:26.933606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1712660 is same with the state(6) to be set 00:22:21.116 [2024-12-05 14:12:26.933611] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1712660 is same with the state(6) to be set 00:22:21.116 [2024-12-05 
14:12:26.933719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1712b30 is same with the state(6) to be set
00:22:21.116 [last message repeated 10 more times, 14:12:26.933737 through 14:12:26.933781]
00:22:21.116 [2024-12-05 14:12:26.933824] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711cc0 is same with the state(6) to be set
00:22:21.116 [last message repeated 7 more times, 14:12:26.933838 through 14:12:26.933866]
00:22:21.116 Write completed with error (sct=0, sc=8)
00:22:21.116 starting I/O failed: -6
00:22:21.116 [the two messages above repeat as each outstanding write on the qpair is failed]
00:22:21.116 [2024-12-05 14:12:26.934926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:21.117 [Write completed with error (sct=0, sc=8) / starting I/O failed: -6 messages repeat]
00:22:21.117 [2024-12-05 14:12:26.935771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:21.117 [Write completed with error (sct=0, sc=8) / starting I/O failed: -6 messages repeat]
00:22:21.117 [2024-12-05 14:12:26.936390] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17117f0 is same with the state(6) to be set
00:22:21.117 [last message repeated 6 more times, 14:12:26.936405 through 14:12:26.936430, interleaved with the write-error messages]
00:22:21.117 [2024-12-05 14:12:26.936686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:21.117 [Write completed with error (sct=0, sc=8) / starting I/O failed: -6 messages repeat]
00:22:21.118 [2024-12-05 14:12:26.938296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:21.118 NVMe io qpair process completion error
00:22:21.118 [Write completed with error (sct=0, sc=8) / starting I/O failed: -6 messages repeat]
00:22:21.118 [2024-12-05 14:12:26.939448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:21.118 [Write completed with error (sct=0, sc=8) / starting I/O failed: -6 messages repeat]
00:22:21.118 [2024-12-05 14:12:26.940265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:21.118 [Write completed with error (sct=0, sc=8) / starting I/O failed: -6 messages repeat]
00:22:21.119 [2024-12-05 14:12:26.941196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:21.119 [Write completed with error (sct=0, sc=8) / starting I/O failed: -6 messages repeat]
00:22:21.119 [2024-12-05 14:12:26.942649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:21.119 NVMe io qpair process completion error
00:22:21.119 [Write completed with error (sct=0, sc=8) / starting I/O failed: -6 messages repeat]
00:22:21.120 [2024-12-05 14:12:26.943839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:21.120 [Write completed with error (sct=0, sc=8) / starting I/O failed: -6 messages repeat]
00:22:21.120 [2024-12-05 14:12:26.944649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:21.120 [Write completed with error (sct=0, sc=8) / starting I/O failed: -6 messages repeat]
00:22:21.121 [2024-12-05 14:12:26.945801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:21.121 [Write completed with error (sct=0, sc=8) / starting I/O failed: -6 messages repeat]
00:22:21.121 [2024-12-05 14:12:26.948071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:21.121 NVMe io qpair process completion error
00:22:21.121 [Write completed with error (sct=0, sc=8) / starting I/O failed: -6 messages repeat]
00:22:21.121 [2024-12-05 14:12:26.949208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:21.122 [Write completed with error (sct=0, sc=8) / starting I/O failed: -6 messages repeat]
00:22:21.122 [2024-12-05 14:12:26.950171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:21.122 [Write completed with error (sct=0, sc=8) / starting I/O failed: -6 messages repeat]
00:22:21.122 [2024-12-05 14:12:26.951280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:21.122 [Write completed with error (sct=0, sc=8) / starting I/O failed: -6 messages repeat]
00:22:21.123 [2024-12-05 14:12:26.952683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:21.123 NVMe io qpair process completion error
00:22:21.123 [Write completed with error (sct=0, sc=8) / starting I/O failed: -6 messages repeat]
00:22:21.123 [2024-12-05 14:12:26.953858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:21.123 [Write completed with error (sct=0, sc=8) / starting I/O failed: -6 messages repeat]
00:22:21.123 [2024-12-05 14:12:26.954833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:21.123 [Write completed with error (sct=0, sc=8) / starting I/O failed: -6 messages repeat]
00:22:21.123 Write completed
with error (sct=0, sc=8) 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.123 starting I/O failed: -6 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.123 starting I/O failed: -6 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.123 starting I/O failed: -6 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.123 starting I/O failed: -6 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.123 starting I/O failed: -6 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.123 starting I/O failed: -6 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.123 starting I/O failed: -6 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.123 starting I/O failed: -6 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.123 starting I/O failed: -6 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.123 starting I/O failed: -6 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.123 starting I/O failed: -6 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.123 starting I/O failed: -6 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.123 starting I/O failed: -6 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.123 starting I/O failed: -6 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.123 starting I/O failed: -6 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.123 starting I/O failed: -6 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.123 starting I/O failed: -6 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.123 starting I/O failed: -6 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.123 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 [2024-12-05 14:12:26.955748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with 
error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error 
(sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 [2024-12-05 14:12:26.957965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:21.124 NVMe io qpair process completion error 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 starting I/O failed: -6 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 00:22:21.124 Write completed with error (sct=0, sc=8) 
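The two messages that repeat above come from different layers of the SPDK NVMe host stack. "Write completed with error (sct=0, sc=8)" is the test application's per-I/O completion callback reporting the raw NVMe status: status code type 0 is the generic command status set, and status code 8 there is SPDK_NVME_SC_ABORTED_SQ_DELETION, i.e. each in-flight write was aborted because its submission queue disappeared along with the dropped TCP connection. A minimal sketch of such a callback against SPDK's public nvme API (illustrative only; write_done and the printf format are assumptions, not the harness's actual code):

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Hypothetical completion callback passed to an I/O submission call.
     * The driver hands every completion, successful or failed, to it. */
    static void
    write_done(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            /* sct=0 (SPDK_NVME_SCT_GENERIC), sc=8
             * (SPDK_NVME_SC_ABORTED_SQ_DELETION) is what the log shows
             * for every write in flight when the qpair went down. */
            printf("Write completed with error (sct=%d, sc=%d)\n",
                   cpl->status.sct, cpl->status.sc);
        }
    }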
00:22:21.124 [2024-12-05 14:12:26.959717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:21.125 [2024-12-05 14:12:26.960545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:21.125 [2024-12-05 14:12:26.961442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:21.126 [2024-12-05 14:12:26.963078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:21.126 NVMe io qpair process completion error
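The bracketed "CQ transport error -6 (No such device or address)" lines, by contrast, are logged by the driver itself (nvme_qpair.c:812) when spdk_nvme_qpair_process_completions() finds the queue pair's transport broken and returns -ENXIO (-6); "starting I/O failed: -6" appears to be the matching application-side message when a new submission is refused with the same errno. A sketch of that poll-and-submit pattern, with the caveat that poll_and_resubmit and its arguments are hypothetical and the real harness differs:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* write_done is the completion callback sketched above. */
    static void write_done(void *ctx, const struct spdk_nvme_cpl *cpl);

    static void
    poll_and_resubmit(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                      void *buf, uint64_t lba, uint32_t lba_count)
    {
        /* Returns the number of completions reaped, or a negative errno;
         * -ENXIO (-6) means the connection behind the qpair is gone. */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);
        if (rc < 0) {
            return; /* leave recovery to the reset/reconnect path */
        }
        /* Fresh submissions on a failed qpair are rejected immediately,
         * which is where "starting I/O failed: -6" would come from. */
        rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
                                    write_done, NULL, 0);
        if (rc != 0) {
            printf("starting I/O failed: %d\n", rc);
        }
    }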
00:22:21.126 [2024-12-05 14:12:26.964326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:21.126 [2024-12-05 14:12:26.965277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:21.127 [2024-12-05 14:12:26.966216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:21.127 [2024-12-05 14:12:26.968168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:21.127 NVMe io qpair process completion error
00:22:21.127 [2024-12-05 14:12:26.969411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:21.128 [2024-12-05 14:12:26.970208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:21.128 [2024-12-05 14:12:26.971136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:21.129 [2024-12-05 14:12:26.973030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:21.129 NVMe io qpair process completion error
00:22:21.129 [2024-12-05 14:12:26.974446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:21.129 Write completed with
error (sct=0, sc=8) 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O 
failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 Write completed with error (sct=0, sc=8) 00:22:21.129 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 [2024-12-05 14:12:26.976044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed 
with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 [2024-12-05 14:12:26.977695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:21.130 
NVMe io qpair process completion error 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.130 starting I/O failed: -6 00:22:21.130 Write completed with error (sct=0, sc=8) 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 
starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 Write completed with error (sct=0, sc=8) 
00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 [2024-12-05 14:12:26.980005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error 
(sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.131 starting I/O failed: -6 00:22:21.131 Write completed with error (sct=0, sc=8) 00:22:21.132 starting I/O failed: -6 00:22:21.132 Write completed with error (sct=0, sc=8) 00:22:21.132 starting I/O failed: -6 00:22:21.132 [2024-12-05 14:12:26.984125] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:21.132 NVMe io qpair process completion error
00:22:21.132 Initializing NVMe Controllers
00:22:21.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:22:21.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:22:21.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:22:21.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:22:21.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:21.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:22:21.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:22:21.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:22:21.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:21.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:22:21.132 Controller IO queue size 128, less than required.
00:22:21.132 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:21.132 [... the two-line queue-size warning above was printed after each of the ten controller attaches; duplicates elided ...]
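The queue-size warning above means the target advertised an IO queue of 128 entries, smaller than the initiator asked for, so excess requests queue inside the NVMe driver. A minimal sketch of a re-run that stays within the advertised depth; the flag values here are illustrative, not taken from this job (-q is queue depth, -o is IO size in bytes, -w is the workload, -t is duration, -r is the transport ID):

# Hedged example: drive one of the subsystems above with queue depth <= 128.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 128 -o 4096 -w write -t 10 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode2'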
00:22:21.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:21.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:21.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:21.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:21.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:21.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:21.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:21.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:21.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:21.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:21.132 Initialization complete. Launching workers.
00:22:21.132 ========================================================
00:22:21.132                                                  Latency(us)
00:22:21.132 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:22:21.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    1886.38      81.06   67871.51     682.94  121930.47
00:22:21.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    1856.66      79.78   68979.21     750.89  123752.63
00:22:21.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    1872.49      80.46   68430.02     744.30  149182.51
00:22:21.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    1874.41      80.54   68381.51     734.44  122429.68
00:22:21.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1853.03      79.62   69209.61     862.57  124821.45
00:22:21.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    1882.53      80.89   68149.96     570.21  121142.42
00:22:21.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    1870.13      80.36   68628.09     816.76  128864.04
00:22:21.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    1896.43      81.49   67702.93     789.35  120100.61
00:22:21.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    1907.76      81.97   67313.73     506.92  132403.12
00:22:21.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   1874.20      80.53   67843.06     876.65  119524.63
00:22:21.132 ========================================================
00:22:21.132 Total                                                                    :   18774.04     806.70   68246.49     506.92  149182.51
00:22:21.132
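A quick way to cross-check the Total row: for a fixed IO size, MiB/s is just IOPS times the IO size, so the average write size implied by these totals can be recovered with a one-liner (back-of-the-envelope only; assumes one uniform IO size for the whole run):

# MiB/s = IOPS * io_size_bytes / 2^20, solved for io_size_bytes:
awk 'BEGIN { printf "%.0f bytes per IO\n", 806.70 * 1048576 / 18774.04 }'
# prints roughly 45053 bytes (~44 KiB)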
00:22:21.132 [2024-12-05 14:12:26.988110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bcae0 is same with the state(6) to be set
00:22:21.132 [2024-12-05 14:12:26.988154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ba890 is same with the state(6) to be set
00:22:21.132 [2024-12-05 14:12:26.988187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bb410 is same with the state(6) to be set
00:22:21.132 [2024-12-05 14:12:26.988217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bb740 is same with the state(6) to be set
00:22:21.132 [2024-12-05 14:12:26.988246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bc720 is same with the state(6) to be set
00:22:21.132 [2024-12-05 14:12:26.988275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ba560 is same with the state(6) to be set
00:22:21.132 [2024-12-05 14:12:26.988305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16babc0 is same with the state(6) to be set
00:22:21.132 [2024-12-05 14:12:26.988335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bba70 is same with the state(6) to be set
00:22:21.132 [2024-12-05 14:12:26.988370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16baef0 is same with the state(6) to be set
00:22:21.132 [2024-12-05 14:12:26.988400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bc900 is same with the state(6) to be set
00:22:21.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:22:21.132 14:12:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:22:22.078 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2795664
00:22:22.078 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:22:22.078 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2795664
00:22:22.078 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:22:22.078 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:22.078 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:22:22.078 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:22.078 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2795664
00:22:22.078 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:22:22.078 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:22.078 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:22.078 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:22.078 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:22:22.078 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:22.078 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:22.078 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:22.078 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:22.078 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
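The NOT/es bookkeeping traced above is the test asserting that `wait 2795664` fails, i.e. that the target process is already gone. A minimal sketch of that inversion idiom in bash (illustrative only, not the actual autotest_common.sh helper, which also validates the argument via valid_exec_arg):

# NOT succeeds only when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?      # run the command, capture its exit status
    (( es != 0 ))      # invert: a non-zero status becomes success
}
# usage: NOT wait 2795664   # passes here, because that pid no longer exists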
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2795277 ']'
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2795277
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2795277 ']'
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2795277
00:22:22.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2795277) - No such process
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2795277 is not found'
00:22:22.079 Process with pid 2795277 is not found
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:22.079 14:12:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
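The iptr step traced above restores every firewall rule except the test's own: it round-trips the ruleset through a filter. The same idiom in isolation, exactly as the three nvmf/common.sh@791 steps compose it (run as root):

# Drop only the SPDK_NVMF-tagged rules; leave everything else intact.
iptables-save | grep -v SPDK_NVMF | iptables-restore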
00:22:24.626 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:24.626
00:22:24.626 real    0m10.277s
00:22:24.626 user    0m27.996s
00:22:24.626 sys     0m4.016s
00:22:24.626 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:24.626 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:24.626 ************************************
00:22:24.626 END TEST nvmf_shutdown_tc4
00:22:24.626 ************************************
00:22:24.626 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:22:24.626
00:22:24.626 real    0m43.904s
00:22:24.626 user    1m47.239s
00:22:24.626 sys     0m13.969s
00:22:24.626 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:24.626 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:22:24.626 ************************************
00:22:24.626 END TEST nvmf_shutdown
00:22:24.626 ************************************
00:22:24.626 14:12:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:22:24.626 14:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:24.626 14:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:24.626 14:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:22:24.626 ************************************
00:22:24.626 START TEST nvmf_nsid
00:22:24.626 ************************************
00:22:24.626 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:22:24.626 * Looking for test storage...
00:22:24.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:22:24.626 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:22:24.626 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version
00:22:24.626 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:22:24.626 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:22:24.626 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:24.626 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:24.626 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:24.626 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:22:24.626 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:22:24.626 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:22:24.626 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:22:24.626 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:22:24.626 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
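The cmp_versions trace here splits each version string on '.', '-' and ':' and compares the components pairwise. Where only a boolean answer is needed, an equivalent bash idiom built on GNU sort -V gives the same 1.15 < 2 verdict in one line (an illustrative alternative, not the scripts/common.sh implementation):

# "is $1 strictly older than $2", using coreutils version sort
lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
lt 1.15 2 && echo "lcov 1.15 predates 2"   # prints the message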
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:22:24.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:24.627 --rc genhtml_branch_coverage=1
00:22:24.627 --rc genhtml_function_coverage=1
00:22:24.627 --rc genhtml_legend=1
00:22:24.627 --rc geninfo_all_blocks=1
00:22:24.627 --rc geninfo_unexecuted_blocks=1
00:22:24.627
00:22:24.627 '
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:22:24.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:24.627 --rc genhtml_branch_coverage=1
00:22:24.627 --rc genhtml_function_coverage=1
00:22:24.627 --rc genhtml_legend=1
00:22:24.627 --rc geninfo_all_blocks=1
00:22:24.627 --rc geninfo_unexecuted_blocks=1
00:22:24.627
00:22:24.627 '
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:22:24.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:24.627 --rc genhtml_branch_coverage=1
00:22:24.627 --rc genhtml_function_coverage=1
00:22:24.627 --rc genhtml_legend=1
00:22:24.627 --rc geninfo_all_blocks=1
00:22:24.627 --rc geninfo_unexecuted_blocks=1
00:22:24.627
00:22:24.627 '
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:22:24.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:24.627 --rc genhtml_branch_coverage=1
00:22:24.627 --rc genhtml_function_coverage=1
00:22:24.627 --rc genhtml_legend=1
00:22:24.627 --rc geninfo_all_blocks=1
00:22:24.627 --rc geninfo_unexecuted_blocks=1
00:22:24.627
00:22:24.627 '
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s
00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux
== FreeBSD ]] 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:24.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:24.627 14:12:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:32.774 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:32.774 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:32.774 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:32.774 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:32.774 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:32.774 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:32.774 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:32.774 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:22:32.774 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:32.774 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:22:32.774 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:22:32.774 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:22:32.774 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:22:32.774 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:22:32.774 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:32.774 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.774 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.774 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.774 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:32.775 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:32.775 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
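A minimal sketch of the PCI scan behind the "Found 0000:4b:00.0 (0x8086 - 0x159b)" messages above, assuming only the E810 device ID seen in this log; the real common.sh keeps a fuller e810/x722/mlx ID table and a pre-built pci_bus_cache map rather than rescanning sysfs:

    #!/usr/bin/env bash
    # Walk PCI sysfs and report functions whose vendor/device pair matches
    # the E810 ID traced above (illustrative subset of the full ID table).
    intel=0x8086
    e810=0x159b
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")
        device=$(<"$pci/device")
        if [[ $vendor == "$intel" && $device == "$e810" ]]; then
            echo "Found ${pci##*/} ($vendor - $device)"
        fi
    done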
00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:32.775 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:32.775 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.775 14:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:32.775 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:32.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:22:32.775 00:22:32.775 --- 10.0.0.2 ping statistics --- 00:22:32.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.775 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:32.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:32.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:22:32.775 00:22:32.775 --- 10.0.0.1 ping statistics --- 00:22:32.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.775 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2801011 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2801011 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2801011 ']' 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.775 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.776 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.776 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.776 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:32.776 [2024-12-05 14:12:38.265024] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
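A condensed sketch of the nvmf_tcp_init sequence traced above: one port of the back-to-back-cabled E810 pair becomes the target side inside a private network namespace, addresses are assigned, an iptables hole is punched for port 4420, and reachability is checked both ways with ping. Interface names and addresses mirror this run and will differ on other rigs:

    NS=cvl_0_0_ns_spdk
    sudo ip netns add "$NS"
    sudo ip link set cvl_0_0 netns "$NS"            # target port lives in the namespace
    sudo ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator port stays in the root ns
    sudo ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    sudo ip link set cvl_0_1 up
    sudo ip netns exec "$NS" ip link set cvl_0_0 up
    sudo ip netns exec "$NS" ip link set lo up
    sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # root ns -> namespaced target
    sudo ip netns exec "$NS" ping -c 1 10.0.0.1     # namespace -> initiator side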
00:22:32.776 [2024-12-05 14:12:38.265094] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.776 [2024-12-05 14:12:38.365424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.776 [2024-12-05 14:12:38.416126] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.776 [2024-12-05 14:12:38.416178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.776 [2024-12-05 14:12:38.416186] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.776 [2024-12-05 14:12:38.416194] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.776 [2024-12-05 14:12:38.416200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:32.776 [2024-12-05 14:12:38.416959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2801131 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=987fd9bd-14b5-4ac8-a963-f9fe3e9ff10f 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=6aeb0989-5e46-4243-85df-e7a64e3e94e3 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=2db2b6f3-c95c-4e2d-8613-5bce0b717f45 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:33.038 null0 00:22:33.038 null1 00:22:33.038 [2024-12-05 14:12:39.196175] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:22:33.038 [2024-12-05 14:12:39.196246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2801131 ] 00:22:33.038 null2 00:22:33.038 [2024-12-05 14:12:39.200278] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.038 [2024-12-05 14:12:39.224592] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2801131 /var/tmp/tgt2.sock 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2801131 ']' 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:33.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
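The "Waiting for process to start up..." line above comes from waitforlisten, which polls the new target's RPC socket until it answers. A rough equivalent, assuming SPDK's rpc.py and the tgt2 socket path from this run; the retry bound and sleep interval are illustrative:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/tgt2.sock
    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods succeeds once the target is listening on the socket
        "$RPC" -s "$SOCK" rpc_get_methods &>/dev/null && break
        sleep 0.5
    done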
00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:33.038 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:33.038 [2024-12-05 14:12:39.290786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.300 [2024-12-05 14:12:39.343010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.560 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:33.560 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:33.560 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:33.822 [2024-12-05 14:12:39.909820] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.822 [2024-12-05 14:12:39.926005] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:33.822 nvme0n1 nvme0n2 00:22:33.822 nvme1n1 00:22:33.822 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:33.822 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:33.822 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:35.209 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:35.209 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:35.209 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:22:35.209 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:35.209 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:35.209 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:35.209 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:35.209 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:35.209 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:35.209 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:35.209 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:22:35.209 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:22:35.209 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:22:36.153 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:36.153 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:36.153 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:36.153 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:36.153 14:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:36.153 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 987fd9bd-14b5-4ac8-a963-f9fe3e9ff10f 00:22:36.153 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:36.153 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:36.153 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:36.153 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:36.153 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=987fd9bd14b54ac8a963f9fe3e9ff10f 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 987FD9BD14B54AC8A963F9FE3E9FF10F 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 987FD9BD14B54AC8A963F9FE3E9FF10F == \9\8\7\F\D\9\B\D\1\4\B\5\4\A\C\8\A\9\6\3\F\9\F\E\3\E\9\F\F\1\0\F ]] 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 6aeb0989-5e46-4243-85df-e7a64e3e94e3 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6aeb09895e46424385dfe7a64e3e94e3 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6AEB09895E46424385DFE7A64E3E94E3 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 6AEB09895E46424385DFE7A64E3E94E3 == \6\A\E\B\0\9\8\9\5\E\4\6\4\2\4\3\8\5\D\F\E\7\A\6\4\E\3\E\9\4\E\3 ]] 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:22:36.415 14:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 2db2b6f3-c95c-4e2d-8613-5bce0b717f45 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2db2b6f3c95c4e2d86135bce0b717f45 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2DB2B6F3C95C4E2D86135BCE0B717F45 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 2DB2B6F3C95C4E2D86135BCE0B717F45 == \2\D\B\2\B\6\F\3\C\9\5\C\4\E\2\D\8\6\1\3\5\B\C\E\0\B\7\1\7\F\4\5 ]] 00:22:36.415 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:36.676 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:36.676 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:36.676 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2801131 00:22:36.676 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2801131 ']' 00:22:36.676 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2801131 00:22:36.676 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:36.676 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:36.676 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2801131 00:22:36.676 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:36.676 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:36.676 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2801131' 00:22:36.676 killing process with pid 2801131 00:22:36.676 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2801131 00:22:36.676 14:12:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2801131 00:22:36.937 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:36.937 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:36.937 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:36.937 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:36.937 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:22:36.937 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:36.937 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:36.937 rmmod nvme_tcp 00:22:36.937 rmmod nvme_fabrics 00:22:36.937 rmmod nvme_keyring 00:22:36.937 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:36.937 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:36.937 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:36.937 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2801011 ']' 00:22:36.937 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2801011 00:22:36.937 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2801011 ']' 00:22:36.937 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2801011 00:22:36.937 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:36.937 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:36.937 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2801011 00:22:37.199 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:37.199 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:37.199 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2801011' 00:22:37.199 killing process with pid 2801011 00:22:37.199 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2801011 00:22:37.199 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2801011 00:22:37.199 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:37.199 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:37.199 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:37.199 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:37.199 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:37.199 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:37.199 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:37.199 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:37.199 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:37.199 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.199 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.199 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.743 14:12:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:39.743 00:22:39.743 real 0m14.958s 00:22:39.743 user 
0m11.445s 00:22:39.743 sys 0m6.831s 00:22:39.743 14:12:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:39.743 14:12:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:39.743 ************************************ 00:22:39.743 END TEST nvmf_nsid 00:22:39.743 ************************************ 00:22:39.743 14:12:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:39.743 00:22:39.743 real 12m57.079s 00:22:39.743 user 27m9.768s 00:22:39.743 sys 3m50.408s 00:22:39.743 14:12:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:39.743 14:12:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:39.743 ************************************ 00:22:39.743 END TEST nvmf_target_extra 00:22:39.743 ************************************ 00:22:39.743 14:12:45 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:39.743 14:12:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:39.744 14:12:45 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:39.744 14:12:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:39.744 ************************************ 00:22:39.744 START TEST nvmf_host 00:22:39.744 ************************************ 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:39.744 * Looking for test storage... 00:22:39.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:39.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.744 --rc genhtml_branch_coverage=1 00:22:39.744 --rc genhtml_function_coverage=1 00:22:39.744 --rc genhtml_legend=1 00:22:39.744 --rc geninfo_all_blocks=1 00:22:39.744 --rc geninfo_unexecuted_blocks=1 00:22:39.744 00:22:39.744 ' 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:39.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.744 --rc genhtml_branch_coverage=1 00:22:39.744 --rc genhtml_function_coverage=1 00:22:39.744 --rc genhtml_legend=1 00:22:39.744 --rc geninfo_all_blocks=1 00:22:39.744 --rc geninfo_unexecuted_blocks=1 00:22:39.744 00:22:39.744 ' 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:39.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.744 --rc genhtml_branch_coverage=1 00:22:39.744 --rc genhtml_function_coverage=1 00:22:39.744 --rc genhtml_legend=1 00:22:39.744 --rc geninfo_all_blocks=1 00:22:39.744 --rc geninfo_unexecuted_blocks=1 00:22:39.744 00:22:39.744 ' 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:39.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.744 --rc genhtml_branch_coverage=1 00:22:39.744 --rc genhtml_function_coverage=1 00:22:39.744 --rc genhtml_legend=1 00:22:39.744 --rc geninfo_all_blocks=1 00:22:39.744 --rc geninfo_unexecuted_blocks=1 00:22:39.744 00:22:39.744 ' 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
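The nsid checks traced earlier hinge on one identity: a namespace NGUID is the namespace UUID with the dashes stripped (the trace's uuid2nguid is just tr -d -). A small sketch of that comparison, reusing the first UUID from this run; the device path and jq filter follow the nvme-cli JSON output shown above:

    uuid=987fd9bd-14b5-4ac8-a963-f9fe3e9ff10f       # ns1uuid from the trace
    expected=$(tr -d - <<< "$uuid")
    nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    # compare case-insensitively; the test uppercases both sides instead
    [[ ${nguid^^} == "${expected^^}" ]] && echo "nguid matches uuid"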
00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.744 14:12:45 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.745 14:12:45 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.745 14:12:45 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:39.745 14:12:45 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.745 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:39.745 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:39.745 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:39.745 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:39.745 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:39.745 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:39.745 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:39.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:39.745 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:39.745 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:39.745 14:12:45 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:39.745 14:12:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:39.745 14:12:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:39.745 14:12:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:39.745 14:12:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:39.745 14:12:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:39.745 14:12:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:39.745 14:12:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.745 ************************************ 00:22:39.745 START TEST nvmf_multicontroller 00:22:39.745 ************************************ 00:22:39.745 14:12:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:39.745 * Looking for test storage... 
00:22:39.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:39.745 14:12:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:39.745 14:12:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:22:39.745 14:12:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:39.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.745 --rc genhtml_branch_coverage=1 00:22:39.745 --rc genhtml_function_coverage=1 00:22:39.745 --rc genhtml_legend=1 00:22:39.745 --rc geninfo_all_blocks=1 00:22:39.745 --rc geninfo_unexecuted_blocks=1 00:22:39.745 00:22:39.745 ' 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:39.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.745 --rc genhtml_branch_coverage=1 00:22:39.745 --rc genhtml_function_coverage=1 00:22:39.745 --rc genhtml_legend=1 00:22:39.745 --rc geninfo_all_blocks=1 00:22:39.745 --rc geninfo_unexecuted_blocks=1 00:22:39.745 00:22:39.745 ' 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:39.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.745 --rc genhtml_branch_coverage=1 00:22:39.745 --rc genhtml_function_coverage=1 00:22:39.745 --rc genhtml_legend=1 00:22:39.745 --rc geninfo_all_blocks=1 00:22:39.745 --rc geninfo_unexecuted_blocks=1 00:22:39.745 00:22:39.745 ' 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:39.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.745 --rc genhtml_branch_coverage=1 00:22:39.745 --rc genhtml_function_coverage=1 00:22:39.745 --rc genhtml_legend=1 00:22:39.745 --rc geninfo_all_blocks=1 00:22:39.745 --rc geninfo_unexecuted_blocks=1 00:22:39.745 00:22:39.745 ' 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:39.745 14:12:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:39.745 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:39.746 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.746 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.746 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.746 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.746 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:40.006 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:40.006 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:40.006 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:40.006 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:40.006 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:40.006 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:40.006 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:40.006 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:40.006 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:40.006 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:40.006 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:40.006 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:40.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:40.007 14:12:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:40.007 14:12:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:48.140 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.140 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:48.141 
14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:48.141 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:48.141 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.141 14:12:53 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:48.141 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:48.141 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
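Note on the device-discovery pass above: gather_supported_nvmf_pci_devs resolves each matching PCI function (here the two Intel E810 0x159b ports) to its kernel net device by globbing sysfs, then strips the directory prefix to get the interface name — exactly what nvmf/common.sh@411 and @427 show. A minimal sketch of that idiom, with the PCI address hard-coded for illustration:

    # map one PCI function to its net device name(s) via sysfs (sketch)
    pci=0000:4b:00.0
    for path in /sys/bus/pci/devices/$pci/net/*; do
        dev=${path##*/}   # strip the sysfs directory prefix, e.g. cvl_0_0
        echo "Found net device under $pci: $dev"
    done

The resulting names (cvl_0_0, cvl_0_1) populate TCP_INTERFACE_LIST for the nvmf_tcp_init step that follows.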
00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:48.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:48.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:22:48.141 00:22:48.141 --- 10.0.0.2 ping statistics --- 00:22:48.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.141 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:48.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:48.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:22:48.141 00:22:48.141 --- 10.0.0.1 ping statistics --- 00:22:48.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.141 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.141 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:48.142 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:48.142 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.142 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:48.142 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:48.142 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:48.142 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:48.142 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:48.142 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:48.142 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2806211 00:22:48.142 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2806211 00:22:48.142 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:48.142 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2806211 ']' 00:22:48.142 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.142 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.142 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.142 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.142 14:12:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:48.142 [2024-12-05 14:12:53.691613] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:22:48.142 [2024-12-05 14:12:53.691677] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.142 [2024-12-05 14:12:53.790757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:48.142 [2024-12-05 14:12:53.843013] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.142 [2024-12-05 14:12:53.843065] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.142 [2024-12-05 14:12:53.843074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.142 [2024-12-05 14:12:53.843081] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.142 [2024-12-05 14:12:53.843087] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:48.142 [2024-12-05 14:12:53.844969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.142 [2024-12-05 14:12:53.845136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.142 [2024-12-05 14:12:53.845137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:48.403 [2024-12-05 14:12:54.565673] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:48.403 Malloc0 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:48.403 [2024-12-05 14:12:54.638730] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:48.403 [2024-12-05 14:12:54.650596] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:48.403 Malloc1 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.403 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:48.665 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.665 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:48.665 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.665 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:48.665 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.665 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:48.665 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.665 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:48.665 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.665 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:48.665 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2806505 00:22:48.665 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:48.665 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2806505 /var/tmp/bdevperf.sock 00:22:48.665 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2806505 ']' 00:22:48.665 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.665 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.665 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:48.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
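By the time bdevperf's socket is awaited, the target side has been configured entirely through rpc_cmd, the harness wrapper around scripts/rpc.py. Assuming direct rpc.py invocation against the target's default /var/tmp/spdk.sock, the sequence driven above is roughly (a sketch, not a verbatim replay of the harness):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # cnode2/Malloc1 get the same treatment, so both subsystems listen on 4420 and 4421

bdevperf itself is launched with -z, which keeps it idle until driven over its own RPC socket (-r /var/tmp/bdevperf.sock), so the controllers under test can be attached to it after startup.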
00:22:48.665 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.665 14:12:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:49.606 NVMe0n1 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.606 1 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:49.606 request: 00:22:49.606 { 00:22:49.606 "name": "NVMe0", 00:22:49.606 "trtype": "tcp", 00:22:49.606 "traddr": "10.0.0.2", 00:22:49.606 "adrfam": "ipv4", 00:22:49.606 "trsvcid": "4420", 00:22:49.606 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:49.606 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:49.606 "hostaddr": "10.0.0.1", 00:22:49.606 "prchk_reftag": false, 00:22:49.606 "prchk_guard": false, 00:22:49.606 "hdgst": false, 00:22:49.606 "ddgst": false, 00:22:49.606 "allow_unrecognized_csi": false, 00:22:49.606 "method": "bdev_nvme_attach_controller", 00:22:49.606 "req_id": 1 00:22:49.606 } 00:22:49.606 Got JSON-RPC error response 00:22:49.606 response: 00:22:49.606 { 00:22:49.606 "code": -114, 00:22:49.606 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:49.606 } 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:49.606 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:49.607 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:49.607 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:49.607 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:49.607 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.607 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:49.866 request: 00:22:49.866 { 00:22:49.866 "name": "NVMe0", 00:22:49.866 "trtype": "tcp", 00:22:49.866 "traddr": "10.0.0.2", 00:22:49.866 "adrfam": "ipv4", 00:22:49.866 "trsvcid": "4420", 00:22:49.866 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:49.866 "hostaddr": "10.0.0.1", 00:22:49.866 "prchk_reftag": false, 00:22:49.866 "prchk_guard": false, 00:22:49.866 "hdgst": false, 00:22:49.866 "ddgst": false, 00:22:49.866 "allow_unrecognized_csi": false, 00:22:49.866 "method": "bdev_nvme_attach_controller", 00:22:49.866 "req_id": 1 00:22:49.866 } 00:22:49.866 Got JSON-RPC error response 00:22:49.866 response: 00:22:49.866 { 00:22:49.866 "code": -114, 00:22:49.866 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:49.866 } 00:22:49.866 14:12:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:49.866 request: 00:22:49.866 { 00:22:49.866 "name": "NVMe0", 00:22:49.866 "trtype": "tcp", 00:22:49.866 "traddr": "10.0.0.2", 00:22:49.866 "adrfam": "ipv4", 00:22:49.866 "trsvcid": "4420", 00:22:49.866 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.866 "hostaddr": "10.0.0.1", 00:22:49.866 "prchk_reftag": false, 00:22:49.866 "prchk_guard": false, 00:22:49.866 "hdgst": false, 00:22:49.866 "ddgst": false, 00:22:49.866 "multipath": "disable", 00:22:49.866 "allow_unrecognized_csi": false, 00:22:49.866 "method": "bdev_nvme_attach_controller", 00:22:49.866 "req_id": 1 00:22:49.866 } 00:22:49.866 Got JSON-RPC error response 00:22:49.866 response: 00:22:49.866 { 00:22:49.866 "code": -114, 00:22:49.866 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:49.866 } 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:49.866 14:12:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:49.866 request: 00:22:49.866 { 00:22:49.866 "name": "NVMe0", 00:22:49.866 "trtype": "tcp", 00:22:49.866 "traddr": "10.0.0.2", 00:22:49.866 "adrfam": "ipv4", 00:22:49.866 "trsvcid": "4420", 00:22:49.866 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.866 "hostaddr": "10.0.0.1", 00:22:49.866 "prchk_reftag": false, 00:22:49.866 "prchk_guard": false, 00:22:49.866 "hdgst": false, 00:22:49.866 "ddgst": false, 00:22:49.866 "multipath": "failover", 00:22:49.866 "allow_unrecognized_csi": false, 00:22:49.866 "method": "bdev_nvme_attach_controller", 00:22:49.866 "req_id": 1 00:22:49.866 } 00:22:49.866 Got JSON-RPC error response 00:22:49.866 response: 00:22:49.866 { 00:22:49.866 "code": -114, 00:22:49.866 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:49.866 } 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.866 14:12:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:49.866 NVMe0n1 00:22:49.866 14:12:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
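The four rejected attaches and the accepted one above pin down the duplicate-name rules of bdev_nvme_attach_controller: reusing -b NVMe0 with a different hostnqn, a different subsystem NQN, or -x disable/failover over the already-claimed path all fail with JSON-RPC code -114, while the same name, same subnqn, and a new listener port is accepted as an additional path. The accepted form, as issued at multicontroller.sh@79 against the bdevperf RPC socket:

    # second path (port 4421) under the existing controller name
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1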
00:22:49.866 14:12:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:49.866 14:12:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.866 14:12:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:49.866 14:12:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.866 14:12:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:49.866 14:12:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.866 14:12:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:50.125 00:22:50.125 14:12:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.125 14:12:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:50.125 14:12:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:50.125 14:12:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.125 14:12:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:50.125 14:12:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.125 14:12:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:50.125 14:12:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:51.067 { 00:22:51.067 "results": [ 00:22:51.067 { 00:22:51.067 "job": "NVMe0n1", 00:22:51.067 "core_mask": "0x1", 00:22:51.067 "workload": "write", 00:22:51.067 "status": "finished", 00:22:51.067 "queue_depth": 128, 00:22:51.067 "io_size": 4096, 00:22:51.067 "runtime": 1.007043, 00:22:51.067 "iops": 24738.764878957503, 00:22:51.067 "mibps": 96.63580030842775, 00:22:51.067 "io_failed": 0, 00:22:51.067 "io_timeout": 0, 00:22:51.067 "avg_latency_us": 5162.2843399028625, 00:22:51.067 "min_latency_us": 2102.6133333333332, 00:22:51.067 "max_latency_us": 13981.013333333334 00:22:51.067 } 00:22:51.067 ], 00:22:51.067 "core_count": 1 00:22:51.067 } 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2806505 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 2806505 ']' 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2806505 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2806505 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2806505' 00:22:51.328 killing process with pid 2806505 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2806505 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2806505 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:22:51.328 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:51.328 [2024-12-05 14:12:54.765678] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:22:51.328 [2024-12-05 14:12:54.765763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2806505 ] 00:22:51.328 [2024-12-05 14:12:54.859153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.328 [2024-12-05 14:12:54.911406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.328 [2024-12-05 14:12:56.225410] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 2a603536-7a03-4303-919b-0d61b481cb77 already exists 00:22:51.328 [2024-12-05 14:12:56.225439] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:2a603536-7a03-4303-919b-0d61b481cb77 alias for bdev NVMe1n1 00:22:51.328 [2024-12-05 14:12:56.225448] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:51.328 Running I/O for 1 seconds... 00:22:51.328 24707.00 IOPS, 96.51 MiB/s 00:22:51.328 Latency(us) 00:22:51.328 [2024-12-05T13:12:57.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.328 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:51.328 NVMe0n1 : 1.01 24738.76 96.64 0.00 0.00 5162.28 2102.61 13981.01 00:22:51.328 [2024-12-05T13:12:57.628Z] =================================================================================================================== 00:22:51.328 [2024-12-05T13:12:57.628Z] Total : 24738.76 96.64 0.00 0.00 5162.28 2102.61 13981.01 00:22:51.328 Received shutdown signal, test time was about 1.000000 seconds 00:22:51.328 00:22:51.328 Latency(us) 00:22:51.328 [2024-12-05T13:12:57.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.328 [2024-12-05T13:12:57.628Z] =================================================================================================================== 00:22:51.328 [2024-12-05T13:12:57.628Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:51.328 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:51.328 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:51.589 rmmod nvme_tcp 00:22:51.589 rmmod nvme_fabrics 00:22:51.589 rmmod nvme_keyring 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:51.589 
14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2806211 ']' 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2806211 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2806211 ']' 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2806211 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2806211 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2806211' 00:22:51.589 killing process with pid 2806211 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2806211 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2806211 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.589 14:12:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.135 14:12:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:54.135 00:22:54.135 real 0m14.132s 00:22:54.135 user 0m17.378s 00:22:54.135 sys 0m6.648s 00:22:54.135 14:12:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:54.135 14:12:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.135 ************************************ 00:22:54.135 END TEST nvmf_multicontroller 00:22:54.135 ************************************ 00:22:54.135 14:12:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:22:54.135 14:12:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:54.135 14:12:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:54.135 14:12:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.135 ************************************ 00:22:54.135 START TEST nvmf_aer 00:22:54.135 ************************************ 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:54.135 * Looking for test storage... 00:22:54.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:54.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.135 --rc genhtml_branch_coverage=1 00:22:54.135 --rc genhtml_function_coverage=1 00:22:54.135 --rc genhtml_legend=1 00:22:54.135 --rc geninfo_all_blocks=1 00:22:54.135 --rc geninfo_unexecuted_blocks=1 00:22:54.135 00:22:54.135 ' 00:22:54.135 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:54.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.136 --rc genhtml_branch_coverage=1 00:22:54.136 --rc genhtml_function_coverage=1 00:22:54.136 --rc genhtml_legend=1 00:22:54.136 --rc geninfo_all_blocks=1 00:22:54.136 --rc geninfo_unexecuted_blocks=1 00:22:54.136 00:22:54.136 ' 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:54.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.136 --rc genhtml_branch_coverage=1 00:22:54.136 --rc genhtml_function_coverage=1 00:22:54.136 --rc genhtml_legend=1 00:22:54.136 --rc geninfo_all_blocks=1 00:22:54.136 --rc geninfo_unexecuted_blocks=1 00:22:54.136 00:22:54.136 ' 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:54.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.136 --rc genhtml_branch_coverage=1 00:22:54.136 --rc genhtml_function_coverage=1 00:22:54.136 --rc genhtml_legend=1 00:22:54.136 --rc geninfo_all_blocks=1 00:22:54.136 --rc geninfo_unexecuted_blocks=1 00:22:54.136 00:22:54.136 ' 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:54.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:54.136 14:13:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:02.278 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:02.278 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:02.278 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:02.278 14:13:07 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:02.278 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:02.278 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:02.279 
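Condensed, the namespace plumbing nvmf_tcp_init sets up here splits the two E810 ports (presumably cabled back to back) across namespaces so a single host can act as both target and initiator; commands and addresses as captured in the trace:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The two ping checks that follow verify both directions of this link before the target is started.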
14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:02.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:02.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:23:02.279 00:23:02.279 --- 10.0.0.2 ping statistics --- 00:23:02.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.279 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:02.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:02.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:23:02.279 00:23:02.279 --- 10.0.0.1 ping statistics --- 00:23:02.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.279 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2811195 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2811195 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2811195 ']' 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.279 14:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:02.279 [2024-12-05 14:13:07.847409] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
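nvmfappstart launches the target inside the namespace created above; the exact invocation is visible in the trace (common.sh@508). A rough equivalent, where the polling loop is an assumption about what the waitforlisten helper does internally rather than a copy of it:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # wait until the app answers on its RPC socket before issuing commands
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done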
00:23:02.279 [2024-12-05 14:13:07.847486] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.279 [2024-12-05 14:13:07.950690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:02.279 [2024-12-05 14:13:08.004609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.279 [2024-12-05 14:13:08.004666] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.280 [2024-12-05 14:13:08.004675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.280 [2024-12-05 14:13:08.004682] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.280 [2024-12-05 14:13:08.004689] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:02.280 [2024-12-05 14:13:08.006784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.280 [2024-12-05 14:13:08.006941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.280 [2024-12-05 14:13:08.007103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.280 [2024-12-05 14:13:08.007103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:02.540 [2024-12-05 14:13:08.721112] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:02.540 Malloc0 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:02.540 [2024-12-05 14:13:08.796298] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.540 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:02.540 [ 00:23:02.540 { 00:23:02.540 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:02.540 "subtype": "Discovery", 00:23:02.540 "listen_addresses": [], 00:23:02.540 "allow_any_host": true, 00:23:02.540 "hosts": [] 00:23:02.540 }, 00:23:02.540 { 00:23:02.540 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:02.540 "subtype": "NVMe", 00:23:02.540 "listen_addresses": [ 00:23:02.540 { 00:23:02.540 "trtype": "TCP", 00:23:02.540 "adrfam": "IPv4", 00:23:02.540 "traddr": "10.0.0.2", 00:23:02.540 "trsvcid": "4420" 00:23:02.540 } 00:23:02.540 ], 00:23:02.540 "allow_any_host": true, 00:23:02.540 "hosts": [], 00:23:02.540 "serial_number": "SPDK00000000000001", 00:23:02.540 "model_number": "SPDK bdev Controller", 00:23:02.540 "max_namespaces": 2, 00:23:02.540 "min_cntlid": 1, 00:23:02.540 "max_cntlid": 65519, 00:23:02.540 "namespaces": [ 00:23:02.540 { 00:23:02.540 "nsid": 1, 00:23:02.540 "bdev_name": "Malloc0", 00:23:02.540 "name": "Malloc0", 00:23:02.541 "nguid": "24178309293D4006957F33855B4CBDB9", 00:23:02.541 "uuid": "24178309-293d-4006-957f-33855b4cbdb9" 00:23:02.541 } 00:23:02.541 ] 00:23:02.541 } 00:23:02.541 ] 00:23:02.541 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.541 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:02.541 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:02.541 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2811545 00:23:02.541 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:02.541 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:02.541 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:02.541 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:02.541 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:02.541 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:02.541 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:02.801 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:02.801 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:02.801 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:02.801 14:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:02.801 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:02.801 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:02.801 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:02.801 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:02.801 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.801 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:02.801 Malloc1 00:23:02.802 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.802 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:02.802 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.802 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:02.802 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.802 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:02.802 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.802 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.063 Asynchronous Event Request test 00:23:03.063 Attaching to 10.0.0.2 00:23:03.063 Attached to 10.0.0.2 00:23:03.063 Registering asynchronous event callbacks... 00:23:03.063 Starting namespace attribute notice tests for all controllers... 00:23:03.063 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:03.063 aer_cb - Changed Namespace 00:23:03.063 Cleaning up... 
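Stripped of the xtrace noise, the aer.sh flow above drives the target with the following RPCs (rpc_cmd forwards to scripts/rpc.py; names and arguments exactly as traced). The hot-added second namespace is what triggers the namespace-attribute notice that aer_cb reports above:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 --name Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # start test/nvme/aer/aer against the subsystem, then hot-add a second namespace
  rpc.py bdev_malloc_create 64 4096 --name Malloc1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2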
00:23:03.063 [ 00:23:03.063 { 00:23:03.063 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:03.063 "subtype": "Discovery", 00:23:03.063 "listen_addresses": [], 00:23:03.063 "allow_any_host": true, 00:23:03.063 "hosts": [] 00:23:03.063 }, 00:23:03.063 { 00:23:03.063 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.063 "subtype": "NVMe", 00:23:03.063 "listen_addresses": [ 00:23:03.063 { 00:23:03.063 "trtype": "TCP", 00:23:03.063 "adrfam": "IPv4", 00:23:03.063 "traddr": "10.0.0.2", 00:23:03.063 "trsvcid": "4420" 00:23:03.063 } 00:23:03.063 ], 00:23:03.063 "allow_any_host": true, 00:23:03.063 "hosts": [], 00:23:03.063 "serial_number": "SPDK00000000000001", 00:23:03.063 "model_number": "SPDK bdev Controller", 00:23:03.063 "max_namespaces": 2, 00:23:03.063 "min_cntlid": 1, 00:23:03.063 "max_cntlid": 65519, 00:23:03.063 "namespaces": [ 00:23:03.063 { 00:23:03.063 "nsid": 1, 00:23:03.063 "bdev_name": "Malloc0", 00:23:03.063 "name": "Malloc0", 00:23:03.063 "nguid": "24178309293D4006957F33855B4CBDB9", 00:23:03.063 "uuid": "24178309-293d-4006-957f-33855b4cbdb9" 00:23:03.063 }, 00:23:03.063 { 00:23:03.063 "nsid": 2, 00:23:03.063 "bdev_name": "Malloc1", 00:23:03.063 "name": "Malloc1", 00:23:03.063 "nguid": "03278F3DBE06497CBB5D2FA02D22D86D", 00:23:03.063 "uuid": "03278f3d-be06-497c-bb5d-2fa02d22d86d" 00:23:03.063 } 00:23:03.063 ] 00:23:03.063 } 00:23:03.063 ] 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2811545 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:03.063 rmmod 
nvme_tcp 00:23:03.063 rmmod nvme_fabrics 00:23:03.063 rmmod nvme_keyring 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2811195 ']' 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2811195 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2811195 ']' 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2811195 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2811195 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2811195' 00:23:03.063 killing process with pid 2811195 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2811195 00:23:03.063 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2811195 00:23:03.324 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:03.324 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:03.324 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:03.324 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:03.324 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:03.324 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:03.324 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:03.324 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:03.324 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:03.324 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.324 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:03.324 14:13:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.869 14:13:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:05.869 00:23:05.869 real 0m11.534s 00:23:05.869 user 0m8.055s 00:23:05.869 sys 0m6.289s 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:05.870 ************************************ 00:23:05.870 END TEST nvmf_aer 00:23:05.870 ************************************ 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.870 ************************************ 00:23:05.870 START TEST nvmf_async_init 00:23:05.870 ************************************ 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:05.870 * Looking for test storage... 00:23:05.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:05.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.870 --rc genhtml_branch_coverage=1 00:23:05.870 --rc genhtml_function_coverage=1 00:23:05.870 --rc genhtml_legend=1 00:23:05.870 --rc geninfo_all_blocks=1 00:23:05.870 --rc geninfo_unexecuted_blocks=1 00:23:05.870 00:23:05.870 ' 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:05.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.870 --rc genhtml_branch_coverage=1 00:23:05.870 --rc genhtml_function_coverage=1 00:23:05.870 --rc genhtml_legend=1 00:23:05.870 --rc geninfo_all_blocks=1 00:23:05.870 --rc geninfo_unexecuted_blocks=1 00:23:05.870 00:23:05.870 ' 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:05.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.870 --rc genhtml_branch_coverage=1 00:23:05.870 --rc genhtml_function_coverage=1 00:23:05.870 --rc genhtml_legend=1 00:23:05.870 --rc geninfo_all_blocks=1 00:23:05.870 --rc geninfo_unexecuted_blocks=1 00:23:05.870 00:23:05.870 ' 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:05.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.870 --rc genhtml_branch_coverage=1 00:23:05.870 --rc genhtml_function_coverage=1 00:23:05.870 --rc genhtml_legend=1 00:23:05.870 --rc geninfo_all_blocks=1 00:23:05.870 --rc geninfo_unexecuted_blocks=1 00:23:05.870 00:23:05.870 ' 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.870 14:13:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.870 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.871 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:05.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:05.871 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:05.871 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:05.871 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:05.871 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:05.871 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:05.871 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:05.871 14:13:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:05.871 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:05.871 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:05.871 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=7c2147ca71b84930ad0942631bbd18d3 00:23:05.871 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:05.871 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:05.871 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.871 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:05.871 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:05.871 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:05.871 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.871 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.871 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.871 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:05.871 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:05.871 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:05.871 14:13:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.010 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:14.010 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:14.010 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:14.010 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:14.011 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:14.011 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:14.011 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:14.011 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:14.011 14:13:19 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:14.011 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:14.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:14.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:23:14.012 00:23:14.012 --- 10.0.0.2 ping statistics --- 00:23:14.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.012 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:14.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:14.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:23:14.012 00:23:14.012 --- 10.0.0.1 ping statistics --- 00:23:14.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.012 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2815793 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2815793 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2815793 ']' 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.012 14:13:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.012 [2024-12-05 14:13:19.492848] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:23:14.012 [2024-12-05 14:13:19.492910] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.012 [2024-12-05 14:13:19.591798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.012 [2024-12-05 14:13:19.643782] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.012 [2024-12-05 14:13:19.643834] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.012 [2024-12-05 14:13:19.643843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.012 [2024-12-05 14:13:19.643849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.012 [2024-12-05 14:13:19.643856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:14.012 [2024-12-05 14:13:19.644619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.274 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.274 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:23:14.274 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:14.274 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:14.274 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.274 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.274 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:14.274 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.274 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.274 [2024-12-05 14:13:20.372026] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.274 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.274 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:14.274 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.274 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.274 null0 00:23:14.274 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.274 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:14.274 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.274 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.274 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.274 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:14.274 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:14.274 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.274 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.274 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7c2147ca71b84930ad0942631bbd18d3 00:23:14.275 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.275 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.275 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.275 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:14.275 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.275 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.275 [2024-12-05 14:13:20.432354] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.275 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.275 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:14.275 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.275 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.536 nvme0n1 00:23:14.536 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.536 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:14.536 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.536 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.536 [ 00:23:14.536 { 00:23:14.536 "name": "nvme0n1", 00:23:14.536 "aliases": [ 00:23:14.536 "7c2147ca-71b8-4930-ad09-42631bbd18d3" 00:23:14.536 ], 00:23:14.536 "product_name": "NVMe disk", 00:23:14.536 "block_size": 512, 00:23:14.536 "num_blocks": 2097152, 00:23:14.536 "uuid": "7c2147ca-71b8-4930-ad09-42631bbd18d3", 00:23:14.536 "numa_id": 0, 00:23:14.536 "assigned_rate_limits": { 00:23:14.536 "rw_ios_per_sec": 0, 00:23:14.536 "rw_mbytes_per_sec": 0, 00:23:14.536 "r_mbytes_per_sec": 0, 00:23:14.536 "w_mbytes_per_sec": 0 00:23:14.536 }, 00:23:14.536 "claimed": false, 00:23:14.536 "zoned": false, 00:23:14.536 "supported_io_types": { 00:23:14.536 "read": true, 00:23:14.536 "write": true, 00:23:14.536 "unmap": false, 00:23:14.536 "flush": true, 00:23:14.536 "reset": true, 00:23:14.536 "nvme_admin": true, 00:23:14.536 "nvme_io": true, 00:23:14.536 "nvme_io_md": false, 00:23:14.536 "write_zeroes": true, 00:23:14.536 "zcopy": false, 00:23:14.536 "get_zone_info": false, 00:23:14.536 "zone_management": false, 00:23:14.536 "zone_append": false, 00:23:14.536 "compare": true, 00:23:14.536 "compare_and_write": true, 00:23:14.536 "abort": true, 00:23:14.536 "seek_hole": false, 00:23:14.536 "seek_data": false, 00:23:14.536 "copy": true, 00:23:14.536 "nvme_iov_md": false 00:23:14.536 }, 00:23:14.536 
"memory_domains": [ 00:23:14.536 { 00:23:14.536 "dma_device_id": "system", 00:23:14.536 "dma_device_type": 1 00:23:14.536 } 00:23:14.536 ], 00:23:14.536 "driver_specific": { 00:23:14.536 "nvme": [ 00:23:14.536 { 00:23:14.536 "trid": { 00:23:14.536 "trtype": "TCP", 00:23:14.536 "adrfam": "IPv4", 00:23:14.536 "traddr": "10.0.0.2", 00:23:14.536 "trsvcid": "4420", 00:23:14.536 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:14.536 }, 00:23:14.536 "ctrlr_data": { 00:23:14.536 "cntlid": 1, 00:23:14.536 "vendor_id": "0x8086", 00:23:14.536 "model_number": "SPDK bdev Controller", 00:23:14.536 "serial_number": "00000000000000000000", 00:23:14.536 "firmware_revision": "25.01", 00:23:14.536 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:14.536 "oacs": { 00:23:14.536 "security": 0, 00:23:14.536 "format": 0, 00:23:14.536 "firmware": 0, 00:23:14.536 "ns_manage": 0 00:23:14.536 }, 00:23:14.536 "multi_ctrlr": true, 00:23:14.536 "ana_reporting": false 00:23:14.536 }, 00:23:14.536 "vs": { 00:23:14.536 "nvme_version": "1.3" 00:23:14.536 }, 00:23:14.536 "ns_data": { 00:23:14.536 "id": 1, 00:23:14.536 "can_share": true 00:23:14.536 } 00:23:14.536 } 00:23:14.536 ], 00:23:14.536 "mp_policy": "active_passive" 00:23:14.536 } 00:23:14.536 } 00:23:14.537 ] 00:23:14.537 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.537 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:14.537 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.537 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.537 [2024-12-05 14:13:20.710090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:14.537 [2024-12-05 14:13:20.710184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2221f50 (9): Bad file descriptor 00:23:14.798 [2024-12-05 14:13:20.842566] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:23:14.798 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.798 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:14.798 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.798 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.798 [ 00:23:14.798 { 00:23:14.798 "name": "nvme0n1", 00:23:14.798 "aliases": [ 00:23:14.798 "7c2147ca-71b8-4930-ad09-42631bbd18d3" 00:23:14.798 ], 00:23:14.798 "product_name": "NVMe disk", 00:23:14.798 "block_size": 512, 00:23:14.798 "num_blocks": 2097152, 00:23:14.798 "uuid": "7c2147ca-71b8-4930-ad09-42631bbd18d3", 00:23:14.798 "numa_id": 0, 00:23:14.798 "assigned_rate_limits": { 00:23:14.798 "rw_ios_per_sec": 0, 00:23:14.798 "rw_mbytes_per_sec": 0, 00:23:14.798 "r_mbytes_per_sec": 0, 00:23:14.798 "w_mbytes_per_sec": 0 00:23:14.798 }, 00:23:14.798 "claimed": false, 00:23:14.798 "zoned": false, 00:23:14.798 "supported_io_types": { 00:23:14.798 "read": true, 00:23:14.798 "write": true, 00:23:14.798 "unmap": false, 00:23:14.798 "flush": true, 00:23:14.798 "reset": true, 00:23:14.798 "nvme_admin": true, 00:23:14.798 "nvme_io": true, 00:23:14.798 "nvme_io_md": false, 00:23:14.798 "write_zeroes": true, 00:23:14.798 "zcopy": false, 00:23:14.798 "get_zone_info": false, 00:23:14.798 "zone_management": false, 00:23:14.798 "zone_append": false, 00:23:14.798 "compare": true, 00:23:14.798 "compare_and_write": true, 00:23:14.798 "abort": true, 00:23:14.798 "seek_hole": false, 00:23:14.798 "seek_data": false, 00:23:14.798 "copy": true, 00:23:14.798 "nvme_iov_md": false 00:23:14.798 }, 00:23:14.798 "memory_domains": [ 00:23:14.798 { 00:23:14.798 "dma_device_id": "system", 00:23:14.798 "dma_device_type": 1 00:23:14.798 } 00:23:14.798 ], 00:23:14.798 "driver_specific": { 00:23:14.798 "nvme": [ 00:23:14.798 { 00:23:14.798 "trid": { 00:23:14.798 "trtype": "TCP", 00:23:14.798 "adrfam": "IPv4", 00:23:14.798 "traddr": "10.0.0.2", 00:23:14.798 "trsvcid": "4420", 00:23:14.798 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:14.798 }, 00:23:14.798 "ctrlr_data": { 00:23:14.798 "cntlid": 2, 00:23:14.798 "vendor_id": "0x8086", 00:23:14.799 "model_number": "SPDK bdev Controller", 00:23:14.799 "serial_number": "00000000000000000000", 00:23:14.799 "firmware_revision": "25.01", 00:23:14.799 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:14.799 "oacs": { 00:23:14.799 "security": 0, 00:23:14.799 "format": 0, 00:23:14.799 "firmware": 0, 00:23:14.799 "ns_manage": 0 00:23:14.799 }, 00:23:14.799 "multi_ctrlr": true, 00:23:14.799 "ana_reporting": false 00:23:14.799 }, 00:23:14.799 "vs": { 00:23:14.799 "nvme_version": "1.3" 00:23:14.799 }, 00:23:14.799 "ns_data": { 00:23:14.799 "id": 1, 00:23:14.799 "can_share": true 00:23:14.799 } 00:23:14.799 } 00:23:14.799 ], 00:23:14.799 "mp_policy": "active_passive" 00:23:14.799 } 00:23:14.799 } 00:23:14.799 ] 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.7lejPZXCAY 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.7lejPZXCAY 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.7lejPZXCAY 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.799 [2024-12-05 14:13:20.934831] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:14.799 [2024-12-05 14:13:20.934989] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.799 14:13:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.799 [2024-12-05 14:13:20.958907] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:14.799 nvme0n1 00:23:14.799 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.799 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:23:14.799 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.799 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.799 [ 00:23:14.799 { 00:23:14.799 "name": "nvme0n1", 00:23:14.799 "aliases": [ 00:23:14.799 "7c2147ca-71b8-4930-ad09-42631bbd18d3" 00:23:14.799 ], 00:23:14.799 "product_name": "NVMe disk", 00:23:14.799 "block_size": 512, 00:23:14.799 "num_blocks": 2097152, 00:23:14.799 "uuid": "7c2147ca-71b8-4930-ad09-42631bbd18d3", 00:23:14.799 "numa_id": 0, 00:23:14.799 "assigned_rate_limits": { 00:23:14.799 "rw_ios_per_sec": 0, 00:23:14.799 "rw_mbytes_per_sec": 0, 00:23:14.799 "r_mbytes_per_sec": 0, 00:23:14.799 "w_mbytes_per_sec": 0 00:23:14.799 }, 00:23:14.799 "claimed": false, 00:23:14.799 "zoned": false, 00:23:14.799 "supported_io_types": { 00:23:14.799 "read": true, 00:23:14.799 "write": true, 00:23:14.799 "unmap": false, 00:23:14.799 "flush": true, 00:23:14.799 "reset": true, 00:23:14.799 "nvme_admin": true, 00:23:14.799 "nvme_io": true, 00:23:14.799 "nvme_io_md": false, 00:23:14.799 "write_zeroes": true, 00:23:14.799 "zcopy": false, 00:23:14.799 "get_zone_info": false, 00:23:14.799 "zone_management": false, 00:23:14.799 "zone_append": false, 00:23:14.799 "compare": true, 00:23:14.799 "compare_and_write": true, 00:23:14.799 "abort": true, 00:23:14.799 "seek_hole": false, 00:23:14.799 "seek_data": false, 00:23:14.799 "copy": true, 00:23:14.799 "nvme_iov_md": false 00:23:14.799 }, 00:23:14.799 "memory_domains": [ 00:23:14.799 { 00:23:14.799 "dma_device_id": "system", 00:23:14.799 "dma_device_type": 1 00:23:14.799 } 00:23:14.799 ], 00:23:14.799 "driver_specific": { 00:23:14.799 "nvme": [ 00:23:14.799 { 00:23:14.799 "trid": { 00:23:14.799 "trtype": "TCP", 00:23:14.799 "adrfam": "IPv4", 00:23:14.799 "traddr": "10.0.0.2", 00:23:14.799 "trsvcid": "4421", 00:23:14.799 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:14.799 }, 00:23:14.799 "ctrlr_data": { 00:23:14.799 "cntlid": 3, 00:23:14.799 "vendor_id": "0x8086", 00:23:14.799 "model_number": "SPDK bdev Controller", 00:23:14.799 "serial_number": "00000000000000000000", 00:23:14.799 "firmware_revision": "25.01", 00:23:14.799 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:14.799 "oacs": { 00:23:14.799 "security": 0, 00:23:14.799 "format": 0, 00:23:14.799 "firmware": 0, 00:23:14.799 "ns_manage": 0 00:23:14.799 }, 00:23:14.799 "multi_ctrlr": true, 00:23:14.799 "ana_reporting": false 00:23:14.799 }, 00:23:14.799 "vs": { 00:23:14.799 "nvme_version": "1.3" 00:23:14.799 }, 00:23:14.799 "ns_data": { 00:23:14.799 "id": 1, 00:23:14.799 "can_share": true 00:23:14.799 } 00:23:14.800 } 00:23:14.800 ], 00:23:14.800 "mp_policy": "active_passive" 00:23:14.800 } 00:23:14.800 } 00:23:14.800 ] 00:23:14.800 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.800 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.800 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.800 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:14.800 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.800 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.7lejPZXCAY 00:23:14.800 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
00:23:14.800 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:14.800 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:14.800 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:14.800 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:14.800 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:14.800 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:14.800 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:14.800 rmmod nvme_tcp 00:23:15.061 rmmod nvme_fabrics 00:23:15.061 rmmod nvme_keyring 00:23:15.061 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:15.061 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:15.061 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:15.061 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2815793 ']' 00:23:15.061 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2815793 00:23:15.061 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2815793 ']' 00:23:15.061 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2815793 00:23:15.061 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:15.061 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.061 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2815793 00:23:15.061 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:15.061 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:15.061 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2815793' 00:23:15.061 killing process with pid 2815793 00:23:15.061 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2815793 00:23:15.061 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2815793 00:23:15.322 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:15.322 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:15.322 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:15.322 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:15.322 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:15.322 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:15.322 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:15.322 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:15.322 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:15.322 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:23:15.322 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.322 14:13:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.234 14:13:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:17.234 00:23:17.234 real 0m11.802s 00:23:17.234 user 0m4.318s 00:23:17.234 sys 0m6.074s 00:23:17.234 14:13:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:17.234 14:13:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:17.234 ************************************ 00:23:17.234 END TEST nvmf_async_init 00:23:17.234 ************************************ 00:23:17.234 14:13:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:17.234 14:13:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:17.234 14:13:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:17.234 14:13:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.495 ************************************ 00:23:17.495 START TEST dma 00:23:17.495 ************************************ 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:17.495 * Looking for test storage... 00:23:17.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:17.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.495 --rc genhtml_branch_coverage=1 00:23:17.495 --rc genhtml_function_coverage=1 00:23:17.495 --rc genhtml_legend=1 00:23:17.495 --rc geninfo_all_blocks=1 00:23:17.495 --rc geninfo_unexecuted_blocks=1 00:23:17.495 00:23:17.495 ' 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:17.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.495 --rc genhtml_branch_coverage=1 00:23:17.495 --rc genhtml_function_coverage=1 00:23:17.495 --rc genhtml_legend=1 00:23:17.495 --rc geninfo_all_blocks=1 00:23:17.495 --rc geninfo_unexecuted_blocks=1 00:23:17.495 00:23:17.495 ' 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:17.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.495 --rc genhtml_branch_coverage=1 00:23:17.495 --rc genhtml_function_coverage=1 00:23:17.495 --rc genhtml_legend=1 00:23:17.495 --rc geninfo_all_blocks=1 00:23:17.495 --rc geninfo_unexecuted_blocks=1 00:23:17.495 00:23:17.495 ' 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:17.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.495 --rc genhtml_branch_coverage=1 00:23:17.495 --rc genhtml_function_coverage=1 00:23:17.495 --rc genhtml_legend=1 00:23:17.495 --rc geninfo_all_blocks=1 00:23:17.495 --rc geninfo_unexecuted_blocks=1 00:23:17.495 00:23:17.495 ' 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.495 
14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.495 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:17.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:17.496 00:23:17.496 real 0m0.234s 00:23:17.496 user 0m0.136s 00:23:17.496 sys 0m0.113s 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:17.496 14:13:23 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:17.496 ************************************ 00:23:17.496 END TEST dma 00:23:17.496 ************************************ 00:23:17.757 14:13:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:17.757 14:13:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:17.757 14:13:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:17.757 14:13:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.757 ************************************ 00:23:17.757 START TEST nvmf_identify 00:23:17.757 
************************************ 00:23:17.757 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:17.757 * Looking for test storage... 00:23:17.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:17.757 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:17.757 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:23:17.757 14:13:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:17.757 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:18.018 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:18.018 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:18.018 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:18.018 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:18.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.019 --rc genhtml_branch_coverage=1 00:23:18.019 --rc genhtml_function_coverage=1 00:23:18.019 --rc genhtml_legend=1 00:23:18.019 --rc geninfo_all_blocks=1 00:23:18.019 --rc geninfo_unexecuted_blocks=1 00:23:18.019 00:23:18.019 ' 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:18.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.019 --rc genhtml_branch_coverage=1 00:23:18.019 --rc genhtml_function_coverage=1 00:23:18.019 --rc genhtml_legend=1 00:23:18.019 --rc geninfo_all_blocks=1 00:23:18.019 --rc geninfo_unexecuted_blocks=1 00:23:18.019 00:23:18.019 ' 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:18.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.019 --rc genhtml_branch_coverage=1 00:23:18.019 --rc genhtml_function_coverage=1 00:23:18.019 --rc genhtml_legend=1 00:23:18.019 --rc geninfo_all_blocks=1 00:23:18.019 --rc geninfo_unexecuted_blocks=1 00:23:18.019 00:23:18.019 ' 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:18.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.019 --rc genhtml_branch_coverage=1 00:23:18.019 --rc genhtml_function_coverage=1 00:23:18.019 --rc genhtml_legend=1 00:23:18.019 --rc geninfo_all_blocks=1 00:23:18.019 --rc geninfo_unexecuted_blocks=1 00:23:18.019 00:23:18.019 ' 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:18.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:18.019 14:13:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:26.334 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:26.334 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:26.334 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:26.334 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:26.334 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:26.334 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:26.334 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:26.334 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:26.334 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:26.334 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:26.334 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:26.334 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:26.334 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:26.334 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:26.334 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:26.334 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:26.334 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:26.335 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:26.335 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
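The xtrace here matches the E810 device IDs (0x8086:0x1592/0x159b) collected into pci_devs and then, continuing below, resolves each PCI function to its kernel net device through /sys/bus/pci/devices/$pci/net. A minimal standalone sketch of that resolution step, assuming the same Linux sysfs layout (the script's pci_bus_cache lookup is replaced here by a direct scan):

    # Sketch only: enumerate net devices backing Intel E810 functions, as the
    # gather_supported_nvmf_pci_devs trace does; pci_bus_cache is not used here.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor")    # e.g. 0x8086
        device=$(cat "$pci/device")    # e.g. 0x159b
        [[ $vendor == 0x8086 && $device =~ ^0x(1592|159b)$ ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue  # skip functions with no bound net driver
            echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done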
00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:26.335 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:26.335 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:26.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:26.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.747 ms 00:23:26.335 00:23:26.335 --- 10.0.0.2 ping statistics --- 00:23:26.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.335 rtt min/avg/max/mdev = 0.747/0.747/0.747/0.000 ms 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:26.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:26.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:23:26.335 00:23:26.335 --- 10.0.0.1 ping statistics --- 00:23:26.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.335 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2820330 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2820330 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2820330 ']' 00:23:26.335 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.336 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:26.336 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.336 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.336 14:13:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:26.336 [2024-12-05 14:13:31.711482] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
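The nvmf_tcp_init sequence traced above splits the two ports into a target side and an initiator side: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, TCP/4420 is opened in iptables, and both directions are verified with ping before nvmf_tgt is launched inside the namespace. Condensed into a standalone sketch (run as root; cleanup and error handling omitted; the interface names are the ones this rig happened to enumerate):

    # Sketch of the namespace split performed by nvmf_tcp_init above.
    TGT=cvl_0_0 INI=cvl_0_1 NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT"; ip -4 addr flush "$INI"
    ip netns add "$NS"
    ip link set "$TGT" netns "$NS"                  # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INI"              # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
    ip link set "$INI" up
    ip netns exec "$NS" ip link set "$TGT" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1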
00:23:26.336 [2024-12-05 14:13:31.711556] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.336 [2024-12-05 14:13:31.818526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:26.336 [2024-12-05 14:13:31.873107] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.336 [2024-12-05 14:13:31.873162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.336 [2024-12-05 14:13:31.873171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.336 [2024-12-05 14:13:31.873179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.336 [2024-12-05 14:13:31.873185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:26.336 [2024-12-05 14:13:31.875276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.336 [2024-12-05 14:13:31.875435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.336 [2024-12-05 14:13:31.875601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:26.336 [2024-12-05 14:13:31.875774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.336 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:26.336 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:23:26.336 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:26.336 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.336 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:26.336 [2024-12-05 14:13:32.533264] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.336 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.336 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:26.336 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:26.336 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:26.336 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:26.336 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.336 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:26.336 Malloc0 00:23:26.336 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.336 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:26.336 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.336 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:26.601 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.601 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:26.601 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.601 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:26.601 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.601 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:26.601 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.601 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:26.601 [2024-12-05 14:13:32.654574] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.601 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.601 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:26.601 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.601 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:26.601 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.601 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:26.601 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.601 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:26.601 [ 00:23:26.601 { 00:23:26.601 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:26.601 "subtype": "Discovery", 00:23:26.602 "listen_addresses": [ 00:23:26.602 { 00:23:26.602 "trtype": "TCP", 00:23:26.602 "adrfam": "IPv4", 00:23:26.602 "traddr": "10.0.0.2", 00:23:26.602 "trsvcid": "4420" 00:23:26.602 } 00:23:26.602 ], 00:23:26.602 "allow_any_host": true, 00:23:26.602 "hosts": [] 00:23:26.602 }, 00:23:26.602 { 00:23:26.602 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.602 "subtype": "NVMe", 00:23:26.602 "listen_addresses": [ 00:23:26.602 { 00:23:26.602 "trtype": "TCP", 00:23:26.602 "adrfam": "IPv4", 00:23:26.602 "traddr": "10.0.0.2", 00:23:26.602 "trsvcid": "4420" 00:23:26.602 } 00:23:26.602 ], 00:23:26.602 "allow_any_host": true, 00:23:26.602 "hosts": [], 00:23:26.602 "serial_number": "SPDK00000000000001", 00:23:26.602 "model_number": "SPDK bdev Controller", 00:23:26.602 "max_namespaces": 32, 00:23:26.602 "min_cntlid": 1, 00:23:26.602 "max_cntlid": 65519, 00:23:26.602 "namespaces": [ 00:23:26.602 { 00:23:26.602 "nsid": 1, 00:23:26.602 "bdev_name": "Malloc0", 00:23:26.602 "name": "Malloc0", 00:23:26.602 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:26.602 "eui64": "ABCDEF0123456789", 00:23:26.602 "uuid": "4a0f5028-72e3-4ffd-ba8e-891c5ebcaf9d" 00:23:26.602 } 00:23:26.602 ] 00:23:26.602 } 00:23:26.602 ] 00:23:26.602 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.602 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:26.602 [2024-12-05 14:13:32.719510] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:23:26.602 [2024-12-05 14:13:32.719581] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2820640 ] 00:23:26.602 [2024-12-05 14:13:32.775134] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:26.602 [2024-12-05 14:13:32.775210] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:26.602 [2024-12-05 14:13:32.775216] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:26.602 [2024-12-05 14:13:32.775236] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:26.602 [2024-12-05 14:13:32.775249] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:26.602 [2024-12-05 14:13:32.778906] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:26.602 [2024-12-05 14:13:32.778965] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x720690 0 00:23:26.602 [2024-12-05 14:13:32.786471] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:26.602 [2024-12-05 14:13:32.786490] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:26.602 [2024-12-05 14:13:32.786495] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:26.602 [2024-12-05 14:13:32.786500] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:26.602 [2024-12-05 14:13:32.786547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.602 [2024-12-05 14:13:32.786555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.602 [2024-12-05 14:13:32.786560] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x720690) 00:23:26.602 [2024-12-05 14:13:32.786578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:26.602 [2024-12-05 14:13:32.786602] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782100, cid 0, qid 0 00:23:26.602 [2024-12-05 14:13:32.794467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.602 [2024-12-05 14:13:32.794477] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.602 [2024-12-05 14:13:32.794481] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.602 [2024-12-05 14:13:32.794487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782100) on tqpair=0x720690 00:23:26.602 [2024-12-05 14:13:32.794503] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:26.602 [2024-12-05 14:13:32.794513] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:26.602 [2024-12-05 14:13:32.794519] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:26.602 [2024-12-05 14:13:32.794542] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.602 [2024-12-05 14:13:32.794547] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.602 [2024-12-05 14:13:32.794551] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x720690) 00:23:26.602 [2024-12-05 14:13:32.794560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.602 [2024-12-05 14:13:32.794578] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782100, cid 0, qid 0 00:23:26.602 [2024-12-05 14:13:32.794803] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.602 [2024-12-05 14:13:32.794810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.602 [2024-12-05 14:13:32.794814] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.602 [2024-12-05 14:13:32.794818] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782100) on tqpair=0x720690 00:23:26.602 [2024-12-05 14:13:32.794825] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:26.602 [2024-12-05 14:13:32.794834] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:26.602 [2024-12-05 14:13:32.794842] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.602 [2024-12-05 14:13:32.794846] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.602 [2024-12-05 14:13:32.794849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x720690) 00:23:26.602 [2024-12-05 14:13:32.794856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.602 [2024-12-05 14:13:32.794868] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782100, cid 0, qid 0 00:23:26.602 [2024-12-05 14:13:32.795060] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.602 [2024-12-05 14:13:32.795067] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.602 [2024-12-05 14:13:32.795071] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.602 [2024-12-05 14:13:32.795075] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782100) on tqpair=0x720690 00:23:26.602 [2024-12-05 14:13:32.795081] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:26.602 [2024-12-05 14:13:32.795090] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:26.602 [2024-12-05 14:13:32.795097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.602 [2024-12-05 14:13:32.795101] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.602 [2024-12-05 14:13:32.795104] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x720690) 00:23:26.602 [2024-12-05 14:13:32.795111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.602 [2024-12-05 14:13:32.795123] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782100, cid 0, qid 0 
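The admin-queue trace from here on is the standard fabrics bring-up that spdk_nvme_identify performs against the discovery subsystem: FABRIC CONNECT (returning CNTLID 0x0001), property reads of VS and CAP, a CC.EN=0 / CSTS.RDY=0 disable cycle, CC.EN=1 with a wait for CSTS.RDY=1, and finally IDENTIFY. Outside the harness, roughly the same exchange can be driven with stock nvme-cli over the kernel nvme-tcp module loaded earlier; this is a hedged equivalent, not what the test itself runs:

    # Sketch only: kernel-initiator equivalent of the discovery connect traced
    # above. The nvme-tcp driver performs the same CONNECT / property-get /
    # CC.EN enable sequence internally before reading the discovery log.
    nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn="$(nvme gen-hostnqn)"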
00:23:26.602 [2024-12-05 14:13:32.795364] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.602 [2024-12-05 14:13:32.795371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.602 [2024-12-05 14:13:32.795375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.602 [2024-12-05 14:13:32.795379] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782100) on tqpair=0x720690 00:23:26.602 [2024-12-05 14:13:32.795385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:26.602 [2024-12-05 14:13:32.795396] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.602 [2024-12-05 14:13:32.795400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.602 [2024-12-05 14:13:32.795407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x720690) 00:23:26.602 [2024-12-05 14:13:32.795414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.602 [2024-12-05 14:13:32.795425] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782100, cid 0, qid 0 00:23:26.602 [2024-12-05 14:13:32.795667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.602 [2024-12-05 14:13:32.795674] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.602 [2024-12-05 14:13:32.795678] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.602 [2024-12-05 14:13:32.795682] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782100) on tqpair=0x720690 00:23:26.602 [2024-12-05 14:13:32.795687] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:26.602 [2024-12-05 14:13:32.795693] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:26.602 [2024-12-05 14:13:32.795701] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:26.602 [2024-12-05 14:13:32.795813] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:26.602 [2024-12-05 14:13:32.795819] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:26.602 [2024-12-05 14:13:32.795830] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.602 [2024-12-05 14:13:32.795834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.602 [2024-12-05 14:13:32.795837] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x720690) 00:23:26.602 [2024-12-05 14:13:32.795844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.602 [2024-12-05 14:13:32.795856] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782100, cid 0, qid 0 00:23:26.602 [2024-12-05 14:13:32.796024] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.602 [2024-12-05 14:13:32.796030] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.602 [2024-12-05 14:13:32.796034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.796039] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782100) on tqpair=0x720690 00:23:26.603 [2024-12-05 14:13:32.796044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:26.603 [2024-12-05 14:13:32.796054] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.796058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.796062] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x720690) 00:23:26.603 [2024-12-05 14:13:32.796069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.603 [2024-12-05 14:13:32.796080] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782100, cid 0, qid 0 00:23:26.603 [2024-12-05 14:13:32.796274] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.603 [2024-12-05 14:13:32.796281] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.603 [2024-12-05 14:13:32.796284] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.796289] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782100) on tqpair=0x720690 00:23:26.603 [2024-12-05 14:13:32.796294] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:26.603 [2024-12-05 14:13:32.796299] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:26.603 [2024-12-05 14:13:32.796311] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:26.603 [2024-12-05 14:13:32.796328] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:26.603 [2024-12-05 14:13:32.796341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.796346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x720690) 00:23:26.603 [2024-12-05 14:13:32.796353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.603 [2024-12-05 14:13:32.796364] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782100, cid 0, qid 0 00:23:26.603 [2024-12-05 14:13:32.796581] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.603 [2024-12-05 14:13:32.796588] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.603 [2024-12-05 14:13:32.796593] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.796597] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x720690): datao=0, datal=4096, cccid=0 00:23:26.603 [2024-12-05 14:13:32.796602] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x782100) on tqpair(0x720690): expected_datao=0, payload_size=4096 00:23:26.603 [2024-12-05 14:13:32.796607] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.796617] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.796622] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.796835] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.603 [2024-12-05 14:13:32.796841] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.603 [2024-12-05 14:13:32.796845] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.796849] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782100) on tqpair=0x720690 00:23:26.603 [2024-12-05 14:13:32.796858] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:26.603 [2024-12-05 14:13:32.796864] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:26.603 [2024-12-05 14:13:32.796869] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:26.603 [2024-12-05 14:13:32.796875] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:26.603 [2024-12-05 14:13:32.796880] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:23:26.603 [2024-12-05 14:13:32.796885] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:26.603 [2024-12-05 14:13:32.796894] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:26.603 [2024-12-05 14:13:32.796902] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.796907] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.796911] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x720690) 00:23:26.603 [2024-12-05 14:13:32.796918] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:26.603 [2024-12-05 14:13:32.796930] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782100, cid 0, qid 0 00:23:26.603 [2024-12-05 14:13:32.797185] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.603 [2024-12-05 14:13:32.797194] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.603 [2024-12-05 14:13:32.797198] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.797202] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782100) on tqpair=0x720690 00:23:26.603 [2024-12-05 14:13:32.797211] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.797215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.797220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x720690) 00:23:26.603 [2024-12-05 
14:13:32.797226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.603 [2024-12-05 14:13:32.797233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.797237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.797241] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x720690) 00:23:26.603 [2024-12-05 14:13:32.797247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.603 [2024-12-05 14:13:32.797253] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.797257] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.797261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x720690) 00:23:26.603 [2024-12-05 14:13:32.797267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.603 [2024-12-05 14:13:32.797274] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.797278] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.797281] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x720690) 00:23:26.603 [2024-12-05 14:13:32.797287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.603 [2024-12-05 14:13:32.797292] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:26.603 [2024-12-05 14:13:32.797309] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:26.603 [2024-12-05 14:13:32.797316] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.797320] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x720690) 00:23:26.603 [2024-12-05 14:13:32.797327] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.603 [2024-12-05 14:13:32.797339] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782100, cid 0, qid 0 00:23:26.603 [2024-12-05 14:13:32.797345] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782280, cid 1, qid 0 00:23:26.603 [2024-12-05 14:13:32.797350] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782400, cid 2, qid 0 00:23:26.603 [2024-12-05 14:13:32.797355] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782580, cid 3, qid 0 00:23:26.603 [2024-12-05 14:13:32.797360] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782700, cid 4, qid 0 00:23:26.603 [2024-12-05 14:13:32.797577] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.603 [2024-12-05 14:13:32.797585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.603 [2024-12-05 14:13:32.797588] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.603 
[2024-12-05 14:13:32.797592] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782700) on tqpair=0x720690 00:23:26.603 [2024-12-05 14:13:32.797598] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:26.603 [2024-12-05 14:13:32.797607] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:23:26.603 [2024-12-05 14:13:32.797618] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.797622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x720690) 00:23:26.603 [2024-12-05 14:13:32.797629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.603 [2024-12-05 14:13:32.797640] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782700, cid 4, qid 0 00:23:26.603 [2024-12-05 14:13:32.797829] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.603 [2024-12-05 14:13:32.797837] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.603 [2024-12-05 14:13:32.797840] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.797844] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x720690): datao=0, datal=4096, cccid=4 00:23:26.603 [2024-12-05 14:13:32.797849] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x782700) on tqpair(0x720690): expected_datao=0, payload_size=4096 00:23:26.603 [2024-12-05 14:13:32.797853] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.797880] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.797885] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.798027] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.603 [2024-12-05 14:13:32.798034] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.603 [2024-12-05 14:13:32.798037] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.798041] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782700) on tqpair=0x720690 00:23:26.603 [2024-12-05 14:13:32.798056] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:26.603 [2024-12-05 14:13:32.798085] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.603 [2024-12-05 14:13:32.798090] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x720690) 00:23:26.604 [2024-12-05 14:13:32.798097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.604 [2024-12-05 14:13:32.798105] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.604 [2024-12-05 14:13:32.798109] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.604 [2024-12-05 14:13:32.798112] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x720690) 00:23:26.604 [2024-12-05 14:13:32.798118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.604 [2024-12-05 14:13:32.798133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782700, cid 4, qid 0 00:23:26.604 [2024-12-05 14:13:32.798139] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782880, cid 5, qid 0 00:23:26.604 [2024-12-05 14:13:32.798375] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.604 [2024-12-05 14:13:32.798382] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.604 [2024-12-05 14:13:32.798385] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.604 [2024-12-05 14:13:32.798389] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x720690): datao=0, datal=1024, cccid=4 00:23:26.604 [2024-12-05 14:13:32.798394] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x782700) on tqpair(0x720690): expected_datao=0, payload_size=1024 00:23:26.604 [2024-12-05 14:13:32.798398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.604 [2024-12-05 14:13:32.798406] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.604 [2024-12-05 14:13:32.798409] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.604 [2024-12-05 14:13:32.798418] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.604 [2024-12-05 14:13:32.798425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.604 [2024-12-05 14:13:32.798428] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.604 [2024-12-05 14:13:32.798432] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782880) on tqpair=0x720690 00:23:26.604 [2024-12-05 14:13:32.842462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.604 [2024-12-05 14:13:32.842474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.604 [2024-12-05 14:13:32.842478] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.604 [2024-12-05 14:13:32.842482] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782700) on tqpair=0x720690 00:23:26.604 [2024-12-05 14:13:32.842496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.604 [2024-12-05 14:13:32.842500] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x720690) 00:23:26.604 [2024-12-05 14:13:32.842507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.604 [2024-12-05 14:13:32.842524] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782700, cid 4, qid 0 00:23:26.604 [2024-12-05 14:13:32.842746] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.604 [2024-12-05 14:13:32.842753] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.604 [2024-12-05 14:13:32.842756] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.604 [2024-12-05 14:13:32.842760] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x720690): datao=0, datal=3072, cccid=4 00:23:26.604 [2024-12-05 14:13:32.842765] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x782700) on tqpair(0x720690): expected_datao=0, payload_size=3072 00:23:26.604 [2024-12-05 14:13:32.842769] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
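Before printing the banner below, the initiator pulls the discovery log page (LID 0x70) in pieces (the datal=1024, 3072 and 8 transfers above), where the final 8-byte read re-checks the generation counter to confirm the log did not change mid-read. The entries it returns describe the subsystems configured earlier through rpc_cmd; consolidated, and assuming rpc_cmd wraps scripts/rpc.py against the default /var/tmp/spdk.sock socket, that target setup was:

    # Consolidated target configuration as issued via rpc_cmd earlier in this
    # run (rpc.py path and socket are assumptions; the arguments are verbatim).
    RPC="./scripts/rpc.py"
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420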
00:23:26.604 [2024-12-05 14:13:32.842776] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.604 [2024-12-05 14:13:32.842780] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.604 [2024-12-05 14:13:32.842977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.604 [2024-12-05 14:13:32.842983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.604 [2024-12-05 14:13:32.842986] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.604 [2024-12-05 14:13:32.842990] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782700) on tqpair=0x720690 00:23:26.604 [2024-12-05 14:13:32.842999] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.604 [2024-12-05 14:13:32.843003] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x720690) 00:23:26.604 [2024-12-05 14:13:32.843009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.604 [2024-12-05 14:13:32.843023] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782700, cid 4, qid 0 00:23:26.604 [2024-12-05 14:13:32.843304] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.604 [2024-12-05 14:13:32.843310] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.604 [2024-12-05 14:13:32.843314] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.604 [2024-12-05 14:13:32.843318] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x720690): datao=0, datal=8, cccid=4 00:23:26.604 [2024-12-05 14:13:32.843322] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x782700) on tqpair(0x720690): expected_datao=0, payload_size=8 00:23:26.604 [2024-12-05 14:13:32.843326] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.604 [2024-12-05 14:13:32.843333] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.604 [2024-12-05 14:13:32.843336] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.604 [2024-12-05 14:13:32.888467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.604 [2024-12-05 14:13:32.888484] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.604 [2024-12-05 14:13:32.888487] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.604 [2024-12-05 14:13:32.888491] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782700) on tqpair=0x720690 00:23:26.604 ===================================================== 00:23:26.604 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:26.604 ===================================================== 00:23:26.604 Controller Capabilities/Features 00:23:26.604 ================================ 00:23:26.604 Vendor ID: 0000 00:23:26.604 Subsystem Vendor ID: 0000 00:23:26.604 Serial Number: .................... 00:23:26.604 Model Number: ........................................ 
00:23:26.604 Firmware Version: 25.01 00:23:26.604 Recommended Arb Burst: 0 00:23:26.604 IEEE OUI Identifier: 00 00 00 00:23:26.604 Multi-path I/O 00:23:26.604 May have multiple subsystem ports: No 00:23:26.604 May have multiple controllers: No 00:23:26.604 Associated with SR-IOV VF: No 00:23:26.604 Max Data Transfer Size: 131072 00:23:26.604 Max Number of Namespaces: 0 00:23:26.604 Max Number of I/O Queues: 1024 00:23:26.604 NVMe Specification Version (VS): 1.3 00:23:26.604 NVMe Specification Version (Identify): 1.3 00:23:26.604 Maximum Queue Entries: 128 00:23:26.604 Contiguous Queues Required: Yes 00:23:26.604 Arbitration Mechanisms Supported 00:23:26.604 Weighted Round Robin: Not Supported 00:23:26.604 Vendor Specific: Not Supported 00:23:26.604 Reset Timeout: 15000 ms 00:23:26.604 Doorbell Stride: 4 bytes 00:23:26.604 NVM Subsystem Reset: Not Supported 00:23:26.604 Command Sets Supported 00:23:26.604 NVM Command Set: Supported 00:23:26.604 Boot Partition: Not Supported 00:23:26.604 Memory Page Size Minimum: 4096 bytes 00:23:26.604 Memory Page Size Maximum: 4096 bytes 00:23:26.604 Persistent Memory Region: Not Supported 00:23:26.604 Optional Asynchronous Events Supported 00:23:26.604 Namespace Attribute Notices: Not Supported 00:23:26.604 Firmware Activation Notices: Not Supported 00:23:26.604 ANA Change Notices: Not Supported 00:23:26.604 PLE Aggregate Log Change Notices: Not Supported 00:23:26.604 LBA Status Info Alert Notices: Not Supported 00:23:26.604 EGE Aggregate Log Change Notices: Not Supported 00:23:26.604 Normal NVM Subsystem Shutdown event: Not Supported 00:23:26.604 Zone Descriptor Change Notices: Not Supported 00:23:26.604 Discovery Log Change Notices: Supported 00:23:26.604 Controller Attributes 00:23:26.604 128-bit Host Identifier: Not Supported 00:23:26.604 Non-Operational Permissive Mode: Not Supported 00:23:26.604 NVM Sets: Not Supported 00:23:26.604 Read Recovery Levels: Not Supported 00:23:26.604 Endurance Groups: Not Supported 00:23:26.604 Predictable Latency Mode: Not Supported 00:23:26.604 Traffic Based Keep Alive: Not Supported 00:23:26.604 Namespace Granularity: Not Supported 00:23:26.604 SQ Associations: Not Supported 00:23:26.604 UUID List: Not Supported 00:23:26.604 Multi-Domain Subsystem: Not Supported 00:23:26.604 Fixed Capacity Management: Not Supported 00:23:26.604 Variable Capacity Management: Not Supported 00:23:26.604 Delete Endurance Group: Not Supported 00:23:26.604 Delete NVM Set: Not Supported 00:23:26.604 Extended LBA Formats Supported: Not Supported 00:23:26.604 Flexible Data Placement Supported: Not Supported 00:23:26.604 00:23:26.604 Controller Memory Buffer Support 00:23:26.604 ================================ 00:23:26.604 Supported: No 00:23:26.604 00:23:26.604 Persistent Memory Region Support 00:23:26.604 ================================ 00:23:26.604 Supported: No 00:23:26.604 00:23:26.604 Admin Command Set Attributes 00:23:26.604 ============================ 00:23:26.604 Security Send/Receive: Not Supported 00:23:26.604 Format NVM: Not Supported 00:23:26.604 Firmware Activate/Download: Not Supported 00:23:26.604 Namespace Management: Not Supported 00:23:26.604 Device Self-Test: Not Supported 00:23:26.604 Directives: Not Supported 00:23:26.604 NVMe-MI: Not Supported 00:23:26.604 Virtualization Management: Not Supported 00:23:26.604 Doorbell Buffer Config: Not Supported 00:23:26.604 Get LBA Status Capability: Not Supported 00:23:26.604 Command & Feature Lockdown Capability: Not Supported 00:23:26.604 Abort Command Limit: 1 00:23:26.604 Async 
Event Request Limit: 4 00:23:26.604 Number of Firmware Slots: N/A 00:23:26.604 Firmware Slot 1 Read-Only: N/A 00:23:26.604 Firmware Activation Without Reset: N/A 00:23:26.604 Multiple Update Detection Support: N/A 00:23:26.605 Firmware Update Granularity: No Information Provided 00:23:26.605 Per-Namespace SMART Log: No 00:23:26.605 Asymmetric Namespace Access Log Page: Not Supported 00:23:26.605 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:26.605 Command Effects Log Page: Not Supported 00:23:26.605 Get Log Page Extended Data: Supported 00:23:26.605 Telemetry Log Pages: Not Supported 00:23:26.605 Persistent Event Log Pages: Not Supported 00:23:26.605 Supported Log Pages Log Page: May Support 00:23:26.605 Commands Supported & Effects Log Page: Not Supported 00:23:26.605 Feature Identifiers & Effects Log Page: May Support 00:23:26.605 NVMe-MI Commands & Effects Log Page: May Support 00:23:26.605 Data Area 4 for Telemetry Log: Not Supported 00:23:26.605 Error Log Page Entries Supported: 128 00:23:26.605 Keep Alive: Not Supported 00:23:26.605 00:23:26.605 NVM Command Set Attributes 00:23:26.605 ========================== 00:23:26.605 Submission Queue Entry Size 00:23:26.605 Max: 1 00:23:26.605 Min: 1 00:23:26.605 Completion Queue Entry Size 00:23:26.605 Max: 1 00:23:26.605 Min: 1 00:23:26.605 Number of Namespaces: 0 00:23:26.605 Compare Command: Not Supported 00:23:26.605 Write Uncorrectable Command: Not Supported 00:23:26.605 Dataset Management Command: Not Supported 00:23:26.605 Write Zeroes Command: Not Supported 00:23:26.605 Set Features Save Field: Not Supported 00:23:26.605 Reservations: Not Supported 00:23:26.605 Timestamp: Not Supported 00:23:26.605 Copy: Not Supported 00:23:26.605 Volatile Write Cache: Not Present 00:23:26.605 Atomic Write Unit (Normal): 1 00:23:26.605 Atomic Write Unit (PFail): 1 00:23:26.605 Atomic Compare & Write Unit: 1 00:23:26.605 Fused Compare & Write: Supported 00:23:26.605 Scatter-Gather List 00:23:26.605 SGL Command Set: Supported 00:23:26.605 SGL Keyed: Supported 00:23:26.605 SGL Bit Bucket Descriptor: Not Supported 00:23:26.605 SGL Metadata Pointer: Not Supported 00:23:26.605 Oversized SGL: Not Supported 00:23:26.605 SGL Metadata Address: Not Supported 00:23:26.605 SGL Offset: Supported 00:23:26.605 Transport SGL Data Block: Not Supported 00:23:26.605 Replay Protected Memory Block: Not Supported 00:23:26.605 00:23:26.605 Firmware Slot Information 00:23:26.605 ========================= 00:23:26.605 Active slot: 0 00:23:26.605 00:23:26.605 00:23:26.605 Error Log 00:23:26.605 ========= 00:23:26.605 00:23:26.605 Active Namespaces 00:23:26.605 ================= 00:23:26.605 Discovery Log Page 00:23:26.605 ================== 00:23:26.605 Generation Counter: 2 00:23:26.605 Number of Records: 2 00:23:26.605 Record Format: 0 00:23:26.605 00:23:26.605 Discovery Log Entry 0 00:23:26.605 ---------------------- 00:23:26.605 Transport Type: 3 (TCP) 00:23:26.605 Address Family: 1 (IPv4) 00:23:26.605 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:26.605 Entry Flags: 00:23:26.605 Duplicate Returned Information: 1 00:23:26.605 Explicit Persistent Connection Support for Discovery: 1 00:23:26.605 Transport Requirements: 00:23:26.605 Secure Channel: Not Required 00:23:26.605 Port ID: 0 (0x0000) 00:23:26.605 Controller ID: 65535 (0xffff) 00:23:26.605 Admin Max SQ Size: 128 00:23:26.605 Transport Service Identifier: 4420 00:23:26.605 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:26.605 Transport Address: 10.0.0.2 00:23:26.605 
Discovery Log Entry 1 00:23:26.605 ---------------------- 00:23:26.605 Transport Type: 3 (TCP) 00:23:26.605 Address Family: 1 (IPv4) 00:23:26.605 Subsystem Type: 2 (NVM Subsystem) 00:23:26.605 Entry Flags: 00:23:26.605 Duplicate Returned Information: 0 00:23:26.605 Explicit Persistent Connection Support for Discovery: 0 00:23:26.605 Transport Requirements: 00:23:26.605 Secure Channel: Not Required 00:23:26.605 Port ID: 0 (0x0000) 00:23:26.605 Controller ID: 65535 (0xffff) 00:23:26.605 Admin Max SQ Size: 128 00:23:26.605 Transport Service Identifier: 4420 00:23:26.605 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:26.605 Transport Address: 10.0.0.2 [2024-12-05 14:13:32.888592] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:23:26.605 [2024-12-05 14:13:32.888606] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782100) on tqpair=0x720690 00:23:26.605 [2024-12-05 14:13:32.888614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.605 [2024-12-05 14:13:32.888620] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782280) on tqpair=0x720690 00:23:26.605 [2024-12-05 14:13:32.888624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.605 [2024-12-05 14:13:32.888629] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782400) on tqpair=0x720690 00:23:26.605 [2024-12-05 14:13:32.888634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.605 [2024-12-05 14:13:32.888639] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782580) on tqpair=0x720690 00:23:26.605 [2024-12-05 14:13:32.888643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.605 [2024-12-05 14:13:32.888654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.605 [2024-12-05 14:13:32.888658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.605 [2024-12-05 14:13:32.888662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x720690) 00:23:26.605 [2024-12-05 14:13:32.888670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.605 [2024-12-05 14:13:32.888686] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782580, cid 3, qid 0 00:23:26.605 [2024-12-05 14:13:32.888937] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.605 [2024-12-05 14:13:32.888943] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.605 [2024-12-05 14:13:32.888947] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.605 [2024-12-05 14:13:32.888951] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782580) on tqpair=0x720690 00:23:26.605 [2024-12-05 14:13:32.888958] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.605 [2024-12-05 14:13:32.888962] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.605 [2024-12-05 14:13:32.888966] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x720690) 00:23:26.605 [2024-12-05 14:13:32.888972] 
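The dump above is the discovery controller's identify data followed by its two-record discovery log page, after which the identify utility begins tearing the controller down ("Prepare to destruct SSD"). The GET LOG PAGE (02) commands earlier in the trace (cdw10 00ff0070, then 02ff0070, then 00010070) are the staged reads of log identifier 70h: a 1 KiB read of the page header, a 3072-byte re-read once the record count is known (header plus two 1 KiB entries, matching the datal=3072 c2h transfer), and a final 8-byte read of the generation counter to detect a racing log change. Below is a hedged sketch of fetching the same page with SPDK's public API, simplified to a single 4 KiB read; the program name is hypothetical and the address comes from this log.

    /* disc_sketch.c - hypothetical example, not part of this test run. */
    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"

    static bool g_done;

    static void
    get_log_done(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "GET LOG PAGE (70h) failed\n");
        }
        g_done = true;
    }

    int
    main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        struct spdk_nvmf_discovery_log_page *log;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "disc_sketch";
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }
        spdk_nvme_transport_id_parse(&trid,
            "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2014-08.org.nvmexpress.discovery");
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        /* Assumption: 4 KiB covers the 1 KiB header plus the two 1 KiB
         * records seen above; a robust client re-reads on genctr change. */
        log = spdk_zmalloc(4096, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY,
                           SPDK_MALLOC_DMA);
        if (log == NULL) {
            return 1;
        }
        spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                         SPDK_NVME_GLOBAL_NS_TAG, log, 4096, 0,
                                         get_log_done, NULL);
        while (!g_done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }

        printf("genctr=%" PRIu64 " numrec=%" PRIu64 "\n",
               log->genctr, log->numrec);
        for (uint64_t i = 0; i < log->numrec; i++) {
            struct spdk_nvmf_discovery_log_page_entry *e = &log->entries[i];
            printf("entry %" PRIu64 ": trtype=%u subtype=%u trsvcid=%.32s "
                   "subnqn=%.256s\n",
                   i, e->trtype, e->subtype, e->trsvcid, e->subnqn);
        }

        spdk_free(log);
        return spdk_nvme_detach(ctrlr);
    }
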
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.605 [2024-12-05 14:13:32.888986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782580, cid 3, qid 0 00:23:26.605 [2024-12-05 14:13:32.889254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.605 [2024-12-05 14:13:32.889260] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.605 [2024-12-05 14:13:32.889264] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.605 [2024-12-05 14:13:32.889268] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782580) on tqpair=0x720690 00:23:26.605 [2024-12-05 14:13:32.889273] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:23:26.605 [2024-12-05 14:13:32.889278] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:23:26.605 [2024-12-05 14:13:32.889288] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.605 [2024-12-05 14:13:32.889292] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.605 [2024-12-05 14:13:32.889295] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x720690) 00:23:26.605 [2024-12-05 14:13:32.889305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.605 [2024-12-05 14:13:32.889316] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782580, cid 3, qid 0 00:23:26.605 [2024-12-05 14:13:32.889559] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.605 [2024-12-05 14:13:32.889566] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.605 [2024-12-05 14:13:32.889570] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.605 [2024-12-05 14:13:32.889574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782580) on tqpair=0x720690 00:23:26.605 [2024-12-05 14:13:32.889584] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.605 [2024-12-05 14:13:32.889588] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.605 [2024-12-05 14:13:32.889592] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x720690) 00:23:26.605 [2024-12-05 14:13:32.889599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.605 [2024-12-05 14:13:32.889610] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782580, cid 3, qid 0 00:23:26.605 [2024-12-05 14:13:32.889835] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.605 [2024-12-05 14:13:32.889841] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.605 [2024-12-05 14:13:32.889844] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.605 [2024-12-05 14:13:32.889848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782580) on tqpair=0x720690 00:23:26.605 [2024-12-05 14:13:32.889859] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.605 [2024-12-05 14:13:32.889863] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.605 [2024-12-05 14:13:32.889867] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x720690) 00:23:26.605 [2024-12-05 14:13:32.889874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.605 [2024-12-05 14:13:32.889885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782580, cid 3, qid 0 00:23:26.605 [2024-12-05 14:13:32.890057] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.606 [2024-12-05 14:13:32.890064] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.606 [2024-12-05 14:13:32.890067] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.890071] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782580) on tqpair=0x720690 00:23:26.606 [2024-12-05 14:13:32.890081] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.890085] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.890089] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x720690) 00:23:26.606 [2024-12-05 14:13:32.890095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.606 [2024-12-05 14:13:32.890106] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782580, cid 3, qid 0 00:23:26.606 [2024-12-05 14:13:32.890313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.606 [2024-12-05 14:13:32.890320] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.606 [2024-12-05 14:13:32.890323] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.890327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782580) on tqpair=0x720690 00:23:26.606 [2024-12-05 14:13:32.890336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.890340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.890344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x720690) 00:23:26.606 [2024-12-05 14:13:32.890353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.606 [2024-12-05 14:13:32.890364] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782580, cid 3, qid 0 00:23:26.606 [2024-12-05 14:13:32.890567] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.606 [2024-12-05 14:13:32.890574] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.606 [2024-12-05 14:13:32.890578] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.890581] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782580) on tqpair=0x720690 00:23:26.606 [2024-12-05 14:13:32.890591] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.890595] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.890599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x720690) 00:23:26.606 [2024-12-05 14:13:32.890605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.606 [2024-12-05 14:13:32.890616] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782580, cid 3, qid 0 00:23:26.606 [2024-12-05 14:13:32.890869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.606 [2024-12-05 14:13:32.890875] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.606 [2024-12-05 14:13:32.890878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.890882] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782580) on tqpair=0x720690 00:23:26.606 [2024-12-05 14:13:32.890892] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.890896] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.890899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x720690) 00:23:26.606 [2024-12-05 14:13:32.890906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.606 [2024-12-05 14:13:32.890916] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782580, cid 3, qid 0 00:23:26.606 [2024-12-05 14:13:32.891099] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.606 [2024-12-05 14:13:32.891105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.606 [2024-12-05 14:13:32.891108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.891112] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782580) on tqpair=0x720690 00:23:26.606 [2024-12-05 14:13:32.891122] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.891126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.891129] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x720690) 00:23:26.606 [2024-12-05 14:13:32.891136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.606 [2024-12-05 14:13:32.891147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782580, cid 3, qid 0 00:23:26.606 [2024-12-05 14:13:32.891321] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.606 [2024-12-05 14:13:32.891328] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.606 [2024-12-05 14:13:32.891331] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.891335] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782580) on tqpair=0x720690 00:23:26.606 [2024-12-05 14:13:32.891344] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.891348] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.891352] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x720690) 00:23:26.606 [2024-12-05 14:13:32.891358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.606 [2024-12-05 14:13:32.891372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782580, cid 3, qid 0 00:23:26.606 [2024-12-05 14:13:32.891626] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.606 [2024-12-05 14:13:32.891632] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.606 [2024-12-05 14:13:32.891636] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.891640] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782580) on tqpair=0x720690 00:23:26.606 [2024-12-05 14:13:32.891650] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.891654] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.891658] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x720690) 00:23:26.606 [2024-12-05 14:13:32.891664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.606 [2024-12-05 14:13:32.891675] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782580, cid 3, qid 0 00:23:26.606 [2024-12-05 14:13:32.891876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.606 [2024-12-05 14:13:32.891882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.606 [2024-12-05 14:13:32.891886] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.891890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782580) on tqpair=0x720690 00:23:26.606 [2024-12-05 14:13:32.891899] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.891903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.891907] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x720690) 00:23:26.606 [2024-12-05 14:13:32.891914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.606 [2024-12-05 14:13:32.891924] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782580, cid 3, qid 0 00:23:26.606 [2024-12-05 14:13:32.892087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.606 [2024-12-05 14:13:32.892093] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.606 [2024-12-05 14:13:32.892097] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.892100] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782580) on tqpair=0x720690 00:23:26.606 [2024-12-05 14:13:32.892110] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.892114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.606 [2024-12-05 14:13:32.892117] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x720690) 00:23:26.606 [2024-12-05 14:13:32.892124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.606 [2024-12-05 14:13:32.892135] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782580, cid 3, qid 0 00:23:26.606 [2024-12-05 14:13:32.892329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.606 [2024-12-05 14:13:32.892336] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.606 [2024-12-05 14:13:32.892339] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.607 [2024-12-05 14:13:32.892343] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782580) on tqpair=0x720690 00:23:26.607 [2024-12-05 14:13:32.892352] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.607 [2024-12-05 14:13:32.892356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.607 [2024-12-05 14:13:32.892360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x720690) 00:23:26.607 [2024-12-05 14:13:32.892367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.607 [2024-12-05 14:13:32.892377] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x782580, cid 3, qid 0 00:23:26.871 [2024-12-05 14:13:32.896465] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.871 [2024-12-05 14:13:32.896475] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.871 [2024-12-05 14:13:32.896479] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.871 [2024-12-05 14:13:32.896483] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x782580) on tqpair=0x720690 00:23:26.871 [2024-12-05 14:13:32.896492] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:23:26.871 00:23:26.871 14:13:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:26.871 [2024-12-05 14:13:32.942733] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
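The shell line above runs SPDK's spdk_nvme_identify example against nqn.2016-06.io.spdk:cnode1 with every debug flag enabled (-L all); the DEBUG stream that follows traces the controller-initialization state machine (connect adminq, read vs, read cap, CC.EN toggle, wait for CSTS.RDY, identify, configure AER, keep alive, number of queues, active namespaces). In application code that whole sequence hides behind a single connect call. A reduced sketch under the same assumptions as the earlier examples (hypothetical program name, address and subsystem NQN taken from this log):

    /* identify_sketch.c - hypothetical example, not part of this test run. */
    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int
    main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;
        uint32_t nsid;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }
        spdk_nvme_transport_id_parse(&trid,
            "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1");

        /* spdk_nvme_connect() runs the init state machine traced below:
         * FABRIC CONNECT, property reads, enable, then IDENTIFY. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        /* Cached IDENTIFY CONTROLLER data, fetched during connect. */
        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("SN: %.20s  MN: %.40s  MDTS: %u\n",
               cdata->sn, cdata->mn, cdata->mdts);

        /* Walk the active namespace list (the "Namespace 1 was added"
         * DEBUG line later in this trace). */
        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
            printf("ns %u: %" PRIu64 " sectors of %u bytes\n", nsid,
                   spdk_nvme_ns_get_num_sectors(ns),
                   spdk_nvme_ns_get_sector_size(ns));
        }

        return spdk_nvme_detach(ctrlr);
    }

For discovery-driven attachment to many subsystems at once, spdk_nvme_probe() with probe and attach callbacks is the usual alternative to a direct spdk_nvme_connect().
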
00:23:26.871 [2024-12-05 14:13:32.942781] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2820645 ] 00:23:26.871 [2024-12-05 14:13:32.999998] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:23:26.871 [2024-12-05 14:13:33.000062] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:26.871 [2024-12-05 14:13:33.000068] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:26.871 [2024-12-05 14:13:33.000088] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:26.871 [2024-12-05 14:13:33.000098] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:26.871 [2024-12-05 14:13:33.000770] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:23:26.871 [2024-12-05 14:13:33.000813] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1df9690 0 00:23:26.871 [2024-12-05 14:13:33.011466] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:26.871 [2024-12-05 14:13:33.011482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:26.871 [2024-12-05 14:13:33.011486] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:26.871 [2024-12-05 14:13:33.011490] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:26.872 [2024-12-05 14:13:33.011529] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.011535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.011539] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df9690) 00:23:26.872 [2024-12-05 14:13:33.011552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:26.872 [2024-12-05 14:13:33.011577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b100, cid 0, qid 0 00:23:26.872 [2024-12-05 14:13:33.022467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.872 [2024-12-05 14:13:33.022477] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.872 [2024-12-05 14:13:33.022481] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.022486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b100) on tqpair=0x1df9690 00:23:26.872 [2024-12-05 14:13:33.022497] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:26.872 [2024-12-05 14:13:33.022505] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:23:26.872 [2024-12-05 14:13:33.022510] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:23:26.872 [2024-12-05 14:13:33.022530] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.022534] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.022538] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df9690) 00:23:26.872 [2024-12-05 14:13:33.022547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.872 [2024-12-05 14:13:33.022565] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b100, cid 0, qid 0 00:23:26.872 [2024-12-05 14:13:33.022784] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.872 [2024-12-05 14:13:33.022791] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.872 [2024-12-05 14:13:33.022795] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.022799] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b100) on tqpair=0x1df9690 00:23:26.872 [2024-12-05 14:13:33.022805] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:23:26.872 [2024-12-05 14:13:33.022812] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:23:26.872 [2024-12-05 14:13:33.022820] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.022824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.022827] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df9690) 00:23:26.872 [2024-12-05 14:13:33.022834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.872 [2024-12-05 14:13:33.022846] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b100, cid 0, qid 0 00:23:26.872 [2024-12-05 14:13:33.023008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.872 [2024-12-05 14:13:33.023014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.872 [2024-12-05 14:13:33.023017] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.023021] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b100) on tqpair=0x1df9690 00:23:26.872 [2024-12-05 14:13:33.023027] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:23:26.872 [2024-12-05 14:13:33.023035] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:26.872 [2024-12-05 14:13:33.023042] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.023046] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.023049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df9690) 00:23:26.872 [2024-12-05 14:13:33.023056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.872 [2024-12-05 14:13:33.023067] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b100, cid 0, qid 0 00:23:26.872 [2024-12-05 14:13:33.023236] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.872 [2024-12-05 14:13:33.023242] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.872 [2024-12-05 
14:13:33.023245] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.023249] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b100) on tqpair=0x1df9690 00:23:26.872 [2024-12-05 14:13:33.023254] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:26.872 [2024-12-05 14:13:33.023264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.023268] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.023272] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df9690) 00:23:26.872 [2024-12-05 14:13:33.023282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.872 [2024-12-05 14:13:33.023293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b100, cid 0, qid 0 00:23:26.872 [2024-12-05 14:13:33.023473] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.872 [2024-12-05 14:13:33.023480] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.872 [2024-12-05 14:13:33.023483] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.023487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b100) on tqpair=0x1df9690 00:23:26.872 [2024-12-05 14:13:33.023492] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:26.872 [2024-12-05 14:13:33.023497] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:26.872 [2024-12-05 14:13:33.023505] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:26.872 [2024-12-05 14:13:33.023613] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:23:26.872 [2024-12-05 14:13:33.023619] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:26.872 [2024-12-05 14:13:33.023627] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.023630] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.023634] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df9690) 00:23:26.872 [2024-12-05 14:13:33.023641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.872 [2024-12-05 14:13:33.023652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b100, cid 0, qid 0 00:23:26.872 [2024-12-05 14:13:33.023857] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.872 [2024-12-05 14:13:33.023863] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.872 [2024-12-05 14:13:33.023866] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.023870] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b100) on tqpair=0x1df9690 00:23:26.872 
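The FABRIC PROPERTY GET/SET exchanges in this stretch are the fabrics transport's register accesses: read VS and CAP, write CC.EN = 1, then poll CSTS until RDY = 1, exactly the transitions the nvme_ctrlr DEBUG lines narrate. After connecting, an application can inspect the same registers through SPDK's accessors; a short sketch under the same assumptions as the examples above (hypothetical program name, address from this log):

    /* regs_sketch.c - hypothetical example, not part of this test run. */
    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int
    main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "regs_sketch";
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }
        spdk_nvme_transport_id_parse(&trid,
            "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1");
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        /* Snapshot the registers the init sequence above just exercised. */
        union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
        union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
        union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

        /* MQES is zero-based; MQES + 1 matches "Maximum Queue Entries: 128". */
        printf("VS %u.%u  CAP.MQES %u (max queue entries %u)  CSTS.RDY %u\n",
               vs.bits.mjr, vs.bits.mnr, cap.bits.mqes, cap.bits.mqes + 1,
               csts.bits.rdy);

        return spdk_nvme_detach(ctrlr);
    }
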
[2024-12-05 14:13:33.023875] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:26.872 [2024-12-05 14:13:33.023885] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.023889] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.023892] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df9690) 00:23:26.872 [2024-12-05 14:13:33.023899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.872 [2024-12-05 14:13:33.023910] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b100, cid 0, qid 0 00:23:26.872 [2024-12-05 14:13:33.024074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.872 [2024-12-05 14:13:33.024080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.872 [2024-12-05 14:13:33.024083] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.024087] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b100) on tqpair=0x1df9690 00:23:26.872 [2024-12-05 14:13:33.024092] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:26.872 [2024-12-05 14:13:33.024097] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:26.872 [2024-12-05 14:13:33.024108] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:23:26.872 [2024-12-05 14:13:33.024116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:26.872 [2024-12-05 14:13:33.024130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.024134] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df9690) 00:23:26.872 [2024-12-05 14:13:33.024141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.872 [2024-12-05 14:13:33.024153] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b100, cid 0, qid 0 00:23:26.872 [2024-12-05 14:13:33.024393] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.872 [2024-12-05 14:13:33.024400] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.872 [2024-12-05 14:13:33.024403] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.024407] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df9690): datao=0, datal=4096, cccid=0 00:23:26.872 [2024-12-05 14:13:33.024412] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e5b100) on tqpair(0x1df9690): expected_datao=0, payload_size=4096 00:23:26.872 [2024-12-05 14:13:33.024417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.024430] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.024435] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.065605] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.872 [2024-12-05 14:13:33.065617] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.872 [2024-12-05 14:13:33.065620] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.872 [2024-12-05 14:13:33.065625] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b100) on tqpair=0x1df9690 00:23:26.872 [2024-12-05 14:13:33.065635] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:23:26.872 [2024-12-05 14:13:33.065639] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:23:26.872 [2024-12-05 14:13:33.065644] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:23:26.872 [2024-12-05 14:13:33.065649] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:23:26.873 [2024-12-05 14:13:33.065653] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:23:26.873 [2024-12-05 14:13:33.065659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:23:26.873 [2024-12-05 14:13:33.065668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:26.873 [2024-12-05 14:13:33.065675] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.065679] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.065683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df9690) 00:23:26.873 [2024-12-05 14:13:33.065691] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:26.873 [2024-12-05 14:13:33.065705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b100, cid 0, qid 0 00:23:26.873 [2024-12-05 14:13:33.065893] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.873 [2024-12-05 14:13:33.065899] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.873 [2024-12-05 14:13:33.065903] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.065910] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b100) on tqpair=0x1df9690 00:23:26.873 [2024-12-05 14:13:33.065918] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.065922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.065925] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df9690) 00:23:26.873 [2024-12-05 14:13:33.065932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.873 [2024-12-05 14:13:33.065939] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.065942] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.873 [2024-12-05 
14:13:33.065946] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1df9690) 00:23:26.873 [2024-12-05 14:13:33.065952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.873 [2024-12-05 14:13:33.065958] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.065962] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.065965] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1df9690) 00:23:26.873 [2024-12-05 14:13:33.065971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.873 [2024-12-05 14:13:33.065978] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.065981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.065985] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df9690) 00:23:26.873 [2024-12-05 14:13:33.065991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.873 [2024-12-05 14:13:33.065996] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:26.873 [2024-12-05 14:13:33.066008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:26.873 [2024-12-05 14:13:33.066015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.066019] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df9690) 00:23:26.873 [2024-12-05 14:13:33.066026] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.873 [2024-12-05 14:13:33.066038] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b100, cid 0, qid 0 00:23:26.873 [2024-12-05 14:13:33.066044] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b280, cid 1, qid 0 00:23:26.873 [2024-12-05 14:13:33.066048] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b400, cid 2, qid 0 00:23:26.873 [2024-12-05 14:13:33.066053] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b580, cid 3, qid 0 00:23:26.873 [2024-12-05 14:13:33.066058] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b700, cid 4, qid 0 00:23:26.873 [2024-12-05 14:13:33.066288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.873 [2024-12-05 14:13:33.066294] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.873 [2024-12-05 14:13:33.066298] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.066301] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b700) on tqpair=0x1df9690 00:23:26.873 [2024-12-05 14:13:33.066306] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:23:26.873 [2024-12-05 14:13:33.066311] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:26.873 [2024-12-05 14:13:33.066325] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:23:26.873 [2024-12-05 14:13:33.066333] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:26.873 [2024-12-05 14:13:33.066339] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.066344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.066347] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df9690) 00:23:26.873 [2024-12-05 14:13:33.066354] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:26.873 [2024-12-05 14:13:33.066365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b700, cid 4, qid 0 00:23:26.873 [2024-12-05 14:13:33.066552] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.873 [2024-12-05 14:13:33.066559] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.873 [2024-12-05 14:13:33.066563] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.066567] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b700) on tqpair=0x1df9690 00:23:26.873 [2024-12-05 14:13:33.066637] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:23:26.873 [2024-12-05 14:13:33.066648] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:26.873 [2024-12-05 14:13:33.066656] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.066660] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df9690) 00:23:26.873 [2024-12-05 14:13:33.066667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.873 [2024-12-05 14:13:33.066678] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b700, cid 4, qid 0 00:23:26.873 [2024-12-05 14:13:33.066907] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.873 [2024-12-05 14:13:33.066913] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.873 [2024-12-05 14:13:33.066917] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.066920] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df9690): datao=0, datal=4096, cccid=4 00:23:26.873 [2024-12-05 14:13:33.066925] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e5b700) on tqpair(0x1df9690): expected_datao=0, payload_size=4096 00:23:26.873 [2024-12-05 14:13:33.066930] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.066937] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.066941] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.873 [2024-12-05 
14:13:33.067050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.873 [2024-12-05 14:13:33.067056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.873 [2024-12-05 14:13:33.067060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.067064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b700) on tqpair=0x1df9690 00:23:26.873 [2024-12-05 14:13:33.067076] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:23:26.873 [2024-12-05 14:13:33.067092] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:23:26.873 [2024-12-05 14:13:33.067102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:23:26.873 [2024-12-05 14:13:33.067109] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.067116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df9690) 00:23:26.873 [2024-12-05 14:13:33.067122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.873 [2024-12-05 14:13:33.067135] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b700, cid 4, qid 0 00:23:26.873 [2024-12-05 14:13:33.067337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.873 [2024-12-05 14:13:33.067343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.873 [2024-12-05 14:13:33.067347] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.067351] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df9690): datao=0, datal=4096, cccid=4 00:23:26.873 [2024-12-05 14:13:33.067355] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e5b700) on tqpair(0x1df9690): expected_datao=0, payload_size=4096 00:23:26.873 [2024-12-05 14:13:33.067359] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.067371] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.067375] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.111462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.873 [2024-12-05 14:13:33.111473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.873 [2024-12-05 14:13:33.111477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.111481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b700) on tqpair=0x1df9690 00:23:26.873 [2024-12-05 14:13:33.111494] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:26.873 [2024-12-05 14:13:33.111504] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:26.873 [2024-12-05 14:13:33.111513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.873 [2024-12-05 14:13:33.111517] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1df9690) 00:23:26.873 [2024-12-05 14:13:33.111524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.873 [2024-12-05 14:13:33.111538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b700, cid 4, qid 0 00:23:26.874 [2024-12-05 14:13:33.111730] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.874 [2024-12-05 14:13:33.111736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.874 [2024-12-05 14:13:33.111740] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.111744] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df9690): datao=0, datal=4096, cccid=4 00:23:26.874 [2024-12-05 14:13:33.111748] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e5b700) on tqpair(0x1df9690): expected_datao=0, payload_size=4096 00:23:26.874 [2024-12-05 14:13:33.111752] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.111759] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.111763] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.111904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.874 [2024-12-05 14:13:33.111910] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.874 [2024-12-05 14:13:33.111914] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.111918] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b700) on tqpair=0x1df9690 00:23:26.874 [2024-12-05 14:13:33.111931] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:26.874 [2024-12-05 14:13:33.111940] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:23:26.874 [2024-12-05 14:13:33.111953] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:23:26.874 [2024-12-05 14:13:33.111960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:26.874 [2024-12-05 14:13:33.111965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:26.874 [2024-12-05 14:13:33.111971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:23:26.874 [2024-12-05 14:13:33.111977] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:23:26.874 [2024-12-05 14:13:33.111982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:23:26.874 [2024-12-05 14:13:33.111987] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:23:26.874 [2024-12-05 14:13:33.112005] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.874 
[2024-12-05 14:13:33.112009] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df9690) 00:23:26.874 [2024-12-05 14:13:33.112016] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.874 [2024-12-05 14:13:33.112023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.112027] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.112031] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1df9690) 00:23:26.874 [2024-12-05 14:13:33.112037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.874 [2024-12-05 14:13:33.112052] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b700, cid 4, qid 0 00:23:26.874 [2024-12-05 14:13:33.112057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b880, cid 5, qid 0 00:23:26.874 [2024-12-05 14:13:33.112268] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.874 [2024-12-05 14:13:33.112274] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.874 [2024-12-05 14:13:33.112278] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.112282] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b700) on tqpair=0x1df9690 00:23:26.874 [2024-12-05 14:13:33.112289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.874 [2024-12-05 14:13:33.112295] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.874 [2024-12-05 14:13:33.112298] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.112302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b880) on tqpair=0x1df9690 00:23:26.874 [2024-12-05 14:13:33.112311] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.112315] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1df9690) 00:23:26.874 [2024-12-05 14:13:33.112322] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.874 [2024-12-05 14:13:33.112332] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b880, cid 5, qid 0 00:23:26.874 [2024-12-05 14:13:33.112520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.874 [2024-12-05 14:13:33.112526] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.874 [2024-12-05 14:13:33.112530] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.112534] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b880) on tqpair=0x1df9690 00:23:26.874 [2024-12-05 14:13:33.112545] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.112549] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1df9690) 00:23:26.874 [2024-12-05 14:13:33.112556] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.874 [2024-12-05 14:13:33.112567] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b880, cid 5, qid 0 00:23:26.874 [2024-12-05 14:13:33.112762] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.874 [2024-12-05 14:13:33.112769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.874 [2024-12-05 14:13:33.112772] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.112776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b880) on tqpair=0x1df9690 00:23:26.874 [2024-12-05 14:13:33.112785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.112789] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1df9690) 00:23:26.874 [2024-12-05 14:13:33.112796] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.874 [2024-12-05 14:13:33.112806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b880, cid 5, qid 0 00:23:26.874 [2024-12-05 14:13:33.112997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.874 [2024-12-05 14:13:33.113003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.874 [2024-12-05 14:13:33.113006] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.113010] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b880) on tqpair=0x1df9690 00:23:26.874 [2024-12-05 14:13:33.113026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.113030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1df9690) 00:23:26.874 [2024-12-05 14:13:33.113037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.874 [2024-12-05 14:13:33.113045] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.113049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df9690) 00:23:26.874 [2024-12-05 14:13:33.113055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.874 [2024-12-05 14:13:33.113063] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.113066] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1df9690) 00:23:26.874 [2024-12-05 14:13:33.113073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.874 [2024-12-05 14:13:33.113080] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.113084] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1df9690) 00:23:26.874 [2024-12-05 14:13:33.113090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.874 [2024-12-05 14:13:33.113103] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b880, cid 5, qid 0 00:23:26.874 
[2024-12-05 14:13:33.113108] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b700, cid 4, qid 0 00:23:26.874 [2024-12-05 14:13:33.113113] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5ba00, cid 6, qid 0 00:23:26.874 [2024-12-05 14:13:33.113117] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5bb80, cid 7, qid 0 00:23:26.874 [2024-12-05 14:13:33.113411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.874 [2024-12-05 14:13:33.113423] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.874 [2024-12-05 14:13:33.113426] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.113430] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df9690): datao=0, datal=8192, cccid=5 00:23:26.874 [2024-12-05 14:13:33.113434] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e5b880) on tqpair(0x1df9690): expected_datao=0, payload_size=8192 00:23:26.874 [2024-12-05 14:13:33.113439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.113519] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.113523] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.113529] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.874 [2024-12-05 14:13:33.113535] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.874 [2024-12-05 14:13:33.113539] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.113543] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df9690): datao=0, datal=512, cccid=4 00:23:26.874 [2024-12-05 14:13:33.113547] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e5b700) on tqpair(0x1df9690): expected_datao=0, payload_size=512 00:23:26.874 [2024-12-05 14:13:33.113552] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.113558] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.113562] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.113567] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.874 [2024-12-05 14:13:33.113573] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.874 [2024-12-05 14:13:33.113577] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.113580] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df9690): datao=0, datal=512, cccid=6 00:23:26.874 [2024-12-05 14:13:33.113585] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e5ba00) on tqpair(0x1df9690): expected_datao=0, payload_size=512 00:23:26.874 [2024-12-05 14:13:33.113589] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.874 [2024-12-05 14:13:33.113595] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.875 [2024-12-05 14:13:33.113599] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.875 [2024-12-05 14:13:33.113605] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:26.875 [2024-12-05 14:13:33.113611] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:26.875 [2024-12-05 14:13:33.113614] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:26.875 [2024-12-05 14:13:33.113618] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df9690): datao=0, datal=4096, cccid=7 00:23:26.875 [2024-12-05 14:13:33.113622] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e5bb80) on tqpair(0x1df9690): expected_datao=0, payload_size=4096 00:23:26.875 [2024-12-05 14:13:33.113626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.875 [2024-12-05 14:13:33.113633] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:26.875 [2024-12-05 14:13:33.113637] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:26.875 [2024-12-05 14:13:33.113651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.875 [2024-12-05 14:13:33.113657] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.875 [2024-12-05 14:13:33.113661] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.875 [2024-12-05 14:13:33.113665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b880) on tqpair=0x1df9690 00:23:26.875 [2024-12-05 14:13:33.113677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.875 [2024-12-05 14:13:33.113684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.875 [2024-12-05 14:13:33.113687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.875 [2024-12-05 14:13:33.113693] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b700) on tqpair=0x1df9690 00:23:26.875 [2024-12-05 14:13:33.113704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.875 [2024-12-05 14:13:33.113710] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.875 [2024-12-05 14:13:33.113713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.875 [2024-12-05 14:13:33.113717] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5ba00) on tqpair=0x1df9690 00:23:26.875 [2024-12-05 14:13:33.113724] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.875 [2024-12-05 14:13:33.113730] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.875 [2024-12-05 14:13:33.113734] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.875 [2024-12-05 14:13:33.113737] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5bb80) on tqpair=0x1df9690 00:23:26.875 ===================================================== 00:23:26.875 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:26.875 ===================================================== 00:23:26.875 Controller Capabilities/Features 00:23:26.875 ================================ 00:23:26.875 Vendor ID: 8086 00:23:26.875 Subsystem Vendor ID: 8086 00:23:26.875 Serial Number: SPDK00000000000001 00:23:26.875 Model Number: SPDK bdev Controller 00:23:26.875 Firmware Version: 25.01 00:23:26.875 Recommended Arb Burst: 6 00:23:26.875 IEEE OUI Identifier: e4 d2 5c 00:23:26.875 Multi-path I/O 00:23:26.875 May have multiple subsystem ports: Yes 00:23:26.875 May have multiple controllers: Yes 00:23:26.875 Associated with SR-IOV VF: No 00:23:26.875 Max Data Transfer Size: 131072 00:23:26.875 Max Number of Namespaces: 32 00:23:26.875 Max Number of I/O Queues: 127 00:23:26.875 NVMe Specification Version (VS): 1.3 00:23:26.875 NVMe Specification Version (Identify): 1.3 
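
Editor's note: the DEBUG records above show how each Identify data page is fetched over the admin queue — a CapsuleCmd PDU (pdu type 4) carries the command, and the target answers with a C2HData PDU (type 7, datal=4096) followed by a CapsuleResp PDU (type 5) that completes the tcp_req. The controller report being printed here is the output of SPDK's identify example; a minimal sketch of reproducing it against this run's target (the $SPDK_DIR path and device names are assumptions):

  # Sketch only: assumes a built SPDK tree in $SPDK_DIR and a target still
  # listening at 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode1.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # SPDK's userspace initiator needs no kernel modules:
  "$SPDK_DIR/build/examples/identify" \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

  # Equivalent view through the kernel initiator with nvme-cli:
  modprobe nvme-tcp
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl /dev/nvme0      # controller data (vendor, queues, log pages)
  nvme id-ns /dev/nvme0 -n 1   # namespace 1, mirrored further down in the dump
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
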
00:23:26.875 Maximum Queue Entries: 128 
00:23:26.875 Contiguous Queues Required: Yes 
00:23:26.875 Arbitration Mechanisms Supported 
00:23:26.875 Weighted Round Robin: Not Supported 
00:23:26.875 Vendor Specific: Not Supported 
00:23:26.875 Reset Timeout: 15000 ms 
00:23:26.875 Doorbell Stride: 4 bytes 
00:23:26.875 NVM Subsystem Reset: Not Supported 
00:23:26.875 Command Sets Supported 
00:23:26.875 NVM Command Set: Supported 
00:23:26.875 Boot Partition: Not Supported 
00:23:26.875 Memory Page Size Minimum: 4096 bytes 
00:23:26.875 Memory Page Size Maximum: 4096 bytes 
00:23:26.875 Persistent Memory Region: Not Supported 
00:23:26.875 Optional Asynchronous Events Supported 
00:23:26.875 Namespace Attribute Notices: Supported 
00:23:26.875 Firmware Activation Notices: Not Supported 
00:23:26.875 ANA Change Notices: Not Supported 
00:23:26.875 PLE Aggregate Log Change Notices: Not Supported 
00:23:26.875 LBA Status Info Alert Notices: Not Supported 
00:23:26.875 EGE Aggregate Log Change Notices: Not Supported 
00:23:26.875 Normal NVM Subsystem Shutdown event: Not Supported 
00:23:26.875 Zone Descriptor Change Notices: Not Supported 
00:23:26.875 Discovery Log Change Notices: Not Supported 
00:23:26.875 Controller Attributes 
00:23:26.875 128-bit Host Identifier: Supported 
00:23:26.875 Non-Operational Permissive Mode: Not Supported 
00:23:26.875 NVM Sets: Not Supported 
00:23:26.875 Read Recovery Levels: Not Supported 
00:23:26.875 Endurance Groups: Not Supported 
00:23:26.875 Predictable Latency Mode: Not Supported 
00:23:26.875 Traffic Based Keep ALive: Not Supported 
00:23:26.875 Namespace Granularity: Not Supported 
00:23:26.875 SQ Associations: Not Supported 
00:23:26.875 UUID List: Not Supported 
00:23:26.875 Multi-Domain Subsystem: Not Supported 
00:23:26.875 Fixed Capacity Management: Not Supported 
00:23:26.875 Variable Capacity Management: Not Supported 
00:23:26.875 Delete Endurance Group: Not Supported 
00:23:26.875 Delete NVM Set: Not Supported 
00:23:26.875 Extended LBA Formats Supported: Not Supported 
00:23:26.875 Flexible Data Placement Supported: Not Supported 
00:23:26.875 
00:23:26.875 Controller Memory Buffer Support 
00:23:26.875 ================================ 
00:23:26.875 Supported: No 
00:23:26.875 
00:23:26.875 Persistent Memory Region Support 
00:23:26.875 ================================ 
00:23:26.875 Supported: No 
00:23:26.875 
00:23:26.875 Admin Command Set Attributes 
00:23:26.875 ============================ 
00:23:26.875 Security Send/Receive: Not Supported 
00:23:26.875 Format NVM: Not Supported 
00:23:26.875 Firmware Activate/Download: Not Supported 
00:23:26.875 Namespace Management: Not Supported 
00:23:26.875 Device Self-Test: Not Supported 
00:23:26.875 Directives: Not Supported 
00:23:26.875 NVMe-MI: Not Supported 
00:23:26.875 Virtualization Management: Not Supported 
00:23:26.875 Doorbell Buffer Config: Not Supported 
00:23:26.875 Get LBA Status Capability: Not Supported 
00:23:26.875 Command & Feature Lockdown Capability: Not Supported 
00:23:26.875 Abort Command Limit: 4 
00:23:26.875 Async Event Request Limit: 4 
00:23:26.875 Number of Firmware Slots: N/A 
00:23:26.875 Firmware Slot 1 Read-Only: N/A 
00:23:26.875 Firmware Activation Without Reset: N/A 
00:23:26.875 Multiple Update Detection Support: N/A 
00:23:26.875 Firmware Update Granularity: No Information Provided 
00:23:26.875 Per-Namespace SMART Log: No 
00:23:26.875 Asymmetric Namespace Access Log Page: Not Supported 
00:23:26.875 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 
00:23:26.875 Command Effects Log Page: Supported 
00:23:26.875 Get Log Page Extended Data: Supported 
00:23:26.875 Telemetry Log Pages: Not Supported 
00:23:26.875 Persistent Event Log Pages: Not Supported 
00:23:26.875 Supported Log Pages Log Page: May Support 
00:23:26.875 Commands Supported & Effects Log Page: Not Supported 
00:23:26.875 Feature Identifiers & Effects Log Page:May Support 
00:23:26.875 NVMe-MI Commands & Effects Log Page: May Support 
00:23:26.875 Data Area 4 for Telemetry Log: Not Supported 
00:23:26.875 Error Log Page Entries Supported: 128 
00:23:26.875 Keep Alive: Supported 
00:23:26.875 Keep Alive Granularity: 10000 ms 
00:23:26.875 
00:23:26.875 NVM Command Set Attributes 
00:23:26.875 ========================== 
00:23:26.875 Submission Queue Entry Size 
00:23:26.875 Max: 64 
00:23:26.875 Min: 64 
00:23:26.875 Completion Queue Entry Size 
00:23:26.875 Max: 16 
00:23:26.875 Min: 16 
00:23:26.875 Number of Namespaces: 32 
00:23:26.875 Compare Command: Supported 
00:23:26.875 Write Uncorrectable Command: Not Supported 
00:23:26.875 Dataset Management Command: Supported 
00:23:26.875 Write Zeroes Command: Supported 
00:23:26.875 Set Features Save Field: Not Supported 
00:23:26.875 Reservations: Supported 
00:23:26.875 Timestamp: Not Supported 
00:23:26.875 Copy: Supported 
00:23:26.875 Volatile Write Cache: Present 
00:23:26.875 Atomic Write Unit (Normal): 1 
00:23:26.875 Atomic Write Unit (PFail): 1 
00:23:26.875 Atomic Compare & Write Unit: 1 
00:23:26.875 Fused Compare & Write: Supported 
00:23:26.875 Scatter-Gather List 
00:23:26.875 SGL Command Set: Supported 
00:23:26.875 SGL Keyed: Supported 
00:23:26.875 SGL Bit Bucket Descriptor: Not Supported 
00:23:26.875 SGL Metadata Pointer: Not Supported 
00:23:26.875 Oversized SGL: Not Supported 
00:23:26.875 SGL Metadata Address: Not Supported 
00:23:26.875 SGL Offset: Supported 
00:23:26.875 Transport SGL Data Block: Not Supported 
00:23:26.875 Replay Protected Memory Block: Not Supported 
00:23:26.875 
00:23:26.875 Firmware Slot Information 
00:23:26.875 ========================= 
00:23:26.875 Active slot: 1 
00:23:26.875 Slot 1 Firmware Revision: 25.01 
00:23:26.875 
00:23:26.875 
00:23:26.875 Commands Supported and Effects 
00:23:26.875 ============================== 
00:23:26.875 Admin Commands 
00:23:26.875 -------------- 
00:23:26.875 Get Log Page (02h): Supported 
00:23:26.875 Identify (06h): Supported 
00:23:26.875 Abort (08h): Supported 
00:23:26.875 Set Features (09h): Supported 
00:23:26.875 Get Features (0Ah): Supported 
00:23:26.875 Asynchronous Event Request (0Ch): Supported 
00:23:26.875 Keep Alive (18h): Supported 
00:23:26.876 I/O Commands 
00:23:26.876 ------------ 
00:23:26.876 Flush (00h): Supported LBA-Change 
00:23:26.876 Write (01h): Supported LBA-Change 
00:23:26.876 Read (02h): Supported 
00:23:26.876 Compare (05h): Supported 
00:23:26.876 Write Zeroes (08h): Supported LBA-Change 
00:23:26.876 Dataset Management (09h): Supported LBA-Change 
00:23:26.876 Copy (19h): Supported LBA-Change 
00:23:26.876 
00:23:26.876 Error Log 
00:23:26.876 ========= 
00:23:26.876 
00:23:26.876 Arbitration 
00:23:26.876 =========== 
00:23:26.876 Arbitration Burst: 1 
00:23:26.876 
00:23:26.876 Power Management 
00:23:26.876 ================ 
00:23:26.876 Number of Power States: 1 
00:23:26.876 Current Power State: Power State #0 
00:23:26.876 Power State #0: 
00:23:26.876 Max Power: 0.00 W 
00:23:26.876 Non-Operational State: Operational 
00:23:26.876 Entry Latency: Not Reported 
00:23:26.876 Exit Latency: Not Reported 
00:23:26.876 Relative Read Throughput: 0 
00:23:26.876 Relative Read Latency: 0 
00:23:26.876 Relative Write Throughput: 0 
00:23:26.876 Relative Write Latency: 0 
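
Editor's note: the GET FEATURES and GET LOG PAGE capsules traced a little earlier (ARBITRATION cdw10:00000001, POWER MANAGEMENT 0x02, TEMPERATURE THRESHOLD 0x04, NUMBER OF QUEUES 0x07, plus the error/SMART/firmware log pages) are what fill in the Arbitration, Power Management and Health Information sections of this report. Once connected through the kernel initiator, the same fields can be queried one at a time; a hedged nvme-cli sketch, assuming the controller enumerated as /dev/nvme0:

  # Assumes an active connection (see the connect sketch above);
  # /dev/nvme0 is an assumption about how the controller enumerated.
  nvme get-feature /dev/nvme0 -f 0x01   # Arbitration          (cdw10:00000001 in the trace)
  nvme get-feature /dev/nvme0 -f 0x02   # Power Management     (cdw10:00000002)
  nvme get-feature /dev/nvme0 -f 0x04   # Temperature Threshold
  nvme get-feature /dev/nvme0 -f 0x07   # Number of Queues
  nvme error-log /dev/nvme0             # GET LOG PAGE 01h; 128 entries per this report
  nvme smart-log /dev/nvme0             # GET LOG PAGE 02h; the Health Information section
  nvme fw-log   /dev/nvme0              # GET LOG PAGE 03h; Firmware Slot Information
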
00:23:26.876 Idle Power: Not Reported 00:23:26.876 Active Power: Not Reported 00:23:26.876 Non-Operational Permissive Mode: Not Supported 00:23:26.876 00:23:26.876 Health Information 00:23:26.876 ================== 00:23:26.876 Critical Warnings: 00:23:26.876 Available Spare Space: OK 00:23:26.876 Temperature: OK 00:23:26.876 Device Reliability: OK 00:23:26.876 Read Only: No 00:23:26.876 Volatile Memory Backup: OK 00:23:26.876 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:26.876 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:26.876 Available Spare: 0% 00:23:26.876 Available Spare Threshold: 0% 00:23:26.876 Life Percentage Used:[2024-12-05 14:13:33.113840] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.876 [2024-12-05 14:13:33.113845] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1df9690) 00:23:26.876 [2024-12-05 14:13:33.113852] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.876 [2024-12-05 14:13:33.113864] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5bb80, cid 7, qid 0 00:23:26.876 [2024-12-05 14:13:33.114041] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.876 [2024-12-05 14:13:33.114047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.876 [2024-12-05 14:13:33.114051] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.876 [2024-12-05 14:13:33.114055] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5bb80) on tqpair=0x1df9690 00:23:26.876 [2024-12-05 14:13:33.114090] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:23:26.876 [2024-12-05 14:13:33.114100] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b100) on tqpair=0x1df9690 00:23:26.876 [2024-12-05 14:13:33.114106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.876 [2024-12-05 14:13:33.114112] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b280) on tqpair=0x1df9690 00:23:26.876 [2024-12-05 14:13:33.114117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.876 [2024-12-05 14:13:33.114122] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b400) on tqpair=0x1df9690 00:23:26.876 [2024-12-05 14:13:33.114126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.876 [2024-12-05 14:13:33.114131] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b580) on tqpair=0x1df9690 00:23:26.876 [2024-12-05 14:13:33.114136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.876 [2024-12-05 14:13:33.114145] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.876 [2024-12-05 14:13:33.114149] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.876 [2024-12-05 14:13:33.114152] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df9690) 00:23:26.876 [2024-12-05 14:13:33.114160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:26.876 [2024-12-05 14:13:33.114172] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b580, cid 3, qid 0 00:23:26.876 [2024-12-05 14:13:33.114368] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.876 [2024-12-05 14:13:33.114374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.876 [2024-12-05 14:13:33.114378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.876 [2024-12-05 14:13:33.114382] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b580) on tqpair=0x1df9690 00:23:26.876 [2024-12-05 14:13:33.114393] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.876 [2024-12-05 14:13:33.114397] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.876 [2024-12-05 14:13:33.114400] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df9690) 00:23:26.876 [2024-12-05 14:13:33.114407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.876 [2024-12-05 14:13:33.114421] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b580, cid 3, qid 0 00:23:26.876 [2024-12-05 14:13:33.114611] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.876 [2024-12-05 14:13:33.114617] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.876 [2024-12-05 14:13:33.114621] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.876 [2024-12-05 14:13:33.114625] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b580) on tqpair=0x1df9690 00:23:26.876 [2024-12-05 14:13:33.114630] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:26.876 [2024-12-05 14:13:33.114635] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:26.876 [2024-12-05 14:13:33.114645] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.876 [2024-12-05 14:13:33.114649] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.876 [2024-12-05 14:13:33.114652] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df9690) 00:23:26.876 [2024-12-05 14:13:33.114659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.876 [2024-12-05 14:13:33.114670] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b580, cid 3, qid 0 00:23:26.876 [2024-12-05 14:13:33.114837] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.876 [2024-12-05 14:13:33.114844] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.876 [2024-12-05 14:13:33.114847] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.876 [2024-12-05 14:13:33.114851] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b580) on tqpair=0x1df9690 00:23:26.876 [2024-12-05 14:13:33.114862] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.876 [2024-12-05 14:13:33.114866] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.876 [2024-12-05 14:13:33.114869] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df9690) 00:23:26.876 [2024-12-05 14:13:33.114876] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.876 [2024-12-05 14:13:33.114887] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b580, cid 3, qid 0 00:23:26.876 [2024-12-05 14:13:33.115068] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.876 [2024-12-05 14:13:33.115075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.876 [2024-12-05 14:13:33.115078] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.876 [2024-12-05 14:13:33.115082] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b580) on tqpair=0x1df9690 00:23:26.876 [2024-12-05 14:13:33.115093] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.876 [2024-12-05 14:13:33.115097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.876 [2024-12-05 14:13:33.115100] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df9690) 00:23:26.876 [2024-12-05 14:13:33.115107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.876 [2024-12-05 14:13:33.115117] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b580, cid 3, qid 0 00:23:26.876 [2024-12-05 14:13:33.115299] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.876 [2024-12-05 14:13:33.115306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.876 [2024-12-05 14:13:33.115311] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.876 [2024-12-05 14:13:33.115315] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b580) on tqpair=0x1df9690 00:23:26.877 [2024-12-05 14:13:33.115326] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:26.877 [2024-12-05 14:13:33.115330] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:26.877 [2024-12-05 14:13:33.115333] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df9690) 00:23:26.877 [2024-12-05 14:13:33.115340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.877 [2024-12-05 14:13:33.115351] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5b580, cid 3, qid 0 00:23:26.877 [2024-12-05 14:13:33.119465] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:26.877 [2024-12-05 14:13:33.119473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:26.877 [2024-12-05 14:13:33.119477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:26.877 [2024-12-05 14:13:33.119481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5b580) on tqpair=0x1df9690 00:23:26.877 [2024-12-05 14:13:33.119489] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:23:26.877 0% 00:23:26.877 Data Units Read: 0 00:23:26.877 Data Units Written: 0 00:23:26.877 Host Read Commands: 0 00:23:26.877 Host Write Commands: 0 00:23:26.877 Controller Busy Time: 0 minutes 00:23:26.877 Power Cycles: 0 00:23:26.877 Power On Hours: 0 hours 00:23:26.877 Unsafe Shutdowns: 0 00:23:26.877 Unrecoverable Media Errors: 0 00:23:26.877 Lifetime Error Log Entries: 0 00:23:26.877 Warning Temperature Time: 0 
minutes 00:23:26.877 Critical Temperature Time: 0 minutes 00:23:26.877 00:23:26.877 Number of Queues 00:23:26.877 ================ 00:23:26.877 Number of I/O Submission Queues: 127 00:23:26.877 Number of I/O Completion Queues: 127 00:23:26.877 00:23:26.877 Active Namespaces 00:23:26.877 ================= 00:23:26.877 Namespace ID:1 00:23:26.877 Error Recovery Timeout: Unlimited 00:23:26.877 Command Set Identifier: NVM (00h) 00:23:26.877 Deallocate: Supported 00:23:26.877 Deallocated/Unwritten Error: Not Supported 00:23:26.877 Deallocated Read Value: Unknown 00:23:26.877 Deallocate in Write Zeroes: Not Supported 00:23:26.877 Deallocated Guard Field: 0xFFFF 00:23:26.877 Flush: Supported 00:23:26.877 Reservation: Supported 00:23:26.877 Namespace Sharing Capabilities: Multiple Controllers 00:23:26.877 Size (in LBAs): 131072 (0GiB) 00:23:26.877 Capacity (in LBAs): 131072 (0GiB) 00:23:26.877 Utilization (in LBAs): 131072 (0GiB) 00:23:26.877 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:26.877 EUI64: ABCDEF0123456789 00:23:26.877 UUID: 4a0f5028-72e3-4ffd-ba8e-891c5ebcaf9d 00:23:26.877 Thin Provisioning: Not Supported 00:23:26.877 Per-NS Atomic Units: Yes 00:23:26.877 Atomic Boundary Size (Normal): 0 00:23:26.877 Atomic Boundary Size (PFail): 0 00:23:26.877 Atomic Boundary Offset: 0 00:23:26.877 Maximum Single Source Range Length: 65535 00:23:26.877 Maximum Copy Length: 65535 00:23:26.877 Maximum Source Range Count: 1 00:23:26.877 NGUID/EUI64 Never Reused: No 00:23:26.877 Namespace Write Protected: No 00:23:26.877 Number of LBA Formats: 1 00:23:26.877 Current LBA Format: LBA Format #00 00:23:26.877 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:26.877 00:23:26.877 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:26.877 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:26.877 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.877 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:26.877 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.877 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:26.877 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:26.877 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:26.877 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:26.877 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:26.877 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:26.877 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:26.877 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:27.139 rmmod nvme_tcp 00:23:27.139 rmmod nvme_fabrics 00:23:27.139 rmmod nvme_keyring 00:23:27.139 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:27.139 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:27.139 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:27.139 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2820330 ']' 00:23:27.139 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@518 -- # killprocess 2820330 00:23:27.139 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2820330 ']' 00:23:27.139 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2820330 00:23:27.139 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:23:27.139 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.139 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2820330 00:23:27.139 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:27.139 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:27.139 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2820330' 00:23:27.139 killing process with pid 2820330 00:23:27.139 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2820330 00:23:27.139 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2820330 00:23:27.401 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:27.401 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:27.401 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:27.401 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:27.401 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:23:27.401 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:27.401 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:23:27.401 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:27.401 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:27.401 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.401 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.401 14:13:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.315 14:13:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:29.315 00:23:29.315 real 0m11.705s 00:23:29.315 user 0m8.646s 00:23:29.315 sys 0m6.199s 00:23:29.315 14:13:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.315 14:13:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.315 ************************************ 00:23:29.315 END TEST nvmf_identify 00:23:29.315 ************************************ 00:23:29.315 14:13:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:29.315 14:13:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:29.315 14:13:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.315 14:13:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.576 ************************************ 
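
Editor's note: with the identify pass done, the script tears everything down — the subsystem is deleted over RPC, the kernel initiator modules are unloaded (the rmmod lines above), the nvmf_tgt reactor (pid 2820330 in this run) is killed and waited on, and the test namespace and interface addresses are flushed before nvmf_perf starts. A condensed sketch of that sequence, where $SPDK_DIR and $TGT_PID are assumptions and the target was started from the same shell:

  # Hedged sketch of the teardown traced above.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TGT_PID=2820330                                   # nvmf_tgt pid from this run

  "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring # drop the kernel initiator stack
  kill "$TGT_PID" && wait "$TGT_PID" 2>/dev/null    # wait only works if tgt is our child
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null       # what remove_spdk_ns does here
  ip -4 addr flush cvl_0_1                          # clear the initiator-side test IP
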
00:23:29.576 START TEST nvmf_perf 00:23:29.576 ************************************ 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:29.576 * Looking for test storage... 00:23:29.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:29.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.576 --rc genhtml_branch_coverage=1 00:23:29.576 --rc genhtml_function_coverage=1 00:23:29.576 --rc genhtml_legend=1 00:23:29.576 --rc geninfo_all_blocks=1 00:23:29.576 --rc geninfo_unexecuted_blocks=1 00:23:29.576 00:23:29.576 ' 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:29.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.576 --rc genhtml_branch_coverage=1 00:23:29.576 --rc genhtml_function_coverage=1 00:23:29.576 --rc genhtml_legend=1 00:23:29.576 --rc geninfo_all_blocks=1 00:23:29.576 --rc geninfo_unexecuted_blocks=1 00:23:29.576 00:23:29.576 ' 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:29.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.576 --rc genhtml_branch_coverage=1 00:23:29.576 --rc genhtml_function_coverage=1 00:23:29.576 --rc genhtml_legend=1 00:23:29.576 --rc geninfo_all_blocks=1 00:23:29.576 --rc geninfo_unexecuted_blocks=1 00:23:29.576 00:23:29.576 ' 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:29.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.576 --rc genhtml_branch_coverage=1 00:23:29.576 --rc genhtml_function_coverage=1 00:23:29.576 --rc genhtml_legend=1 00:23:29.576 --rc geninfo_all_blocks=1 00:23:29.576 --rc geninfo_unexecuted_blocks=1 00:23:29.576 00:23:29.576 ' 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:29.576 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:29.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.837 14:13:35 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:29.837 14:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:37.974 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:37.974 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:37.974 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:37.974 14:13:43 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.974 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:37.974 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:37.975 14:13:43 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:37.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:37.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms
00:23:37.975
00:23:37.975 --- 10.0.0.2 ping statistics ---
00:23:37.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:37.975 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms
00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:37.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:37.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms
00:23:37.975
00:23:37.975 --- 10.0.0.1 ping statistics ---
00:23:37.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:37.975 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms
00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0
00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2824970
00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2824970
00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2824970 ']'
00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket
/var/tmp/spdk.sock...' 00:23:37.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.975 14:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:37.975 [2024-12-05 14:13:43.467641] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:23:37.975 [2024-12-05 14:13:43.467707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.975 [2024-12-05 14:13:43.568958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:37.975 [2024-12-05 14:13:43.622603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.975 [2024-12-05 14:13:43.622658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.975 [2024-12-05 14:13:43.622667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.975 [2024-12-05 14:13:43.622675] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.975 [2024-12-05 14:13:43.622681] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:37.975 [2024-12-05 14:13:43.625034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.975 [2024-12-05 14:13:43.625193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.975 [2024-12-05 14:13:43.625359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.975 [2024-12-05 14:13:43.625359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:38.237 14:13:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.237 14:13:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:23:38.237 14:13:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:38.237 14:13:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:38.237 14:13:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:38.237 14:13:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.237 14:13:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:38.237 14:13:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:38.809 14:13:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:38.810 14:13:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:38.810 14:13:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:23:38.810 14:13:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:39.070 14:13:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
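The trace above recovers the local NVMe controller's PCI address from the target's bdev configuration and builds the bdev list for the test. As a readability aid, a minimal standalone sketch of that step, reconstructed from the traced commands; it assumes a running nvmf_tgt on the default RPC socket and jq on the PATH, and the variable names ($SPDK_DIR, $rpc) are illustrative, not part of the harness:

    #!/usr/bin/env bash
    # Sketch of the bdev-preparation step traced above (host/perf.sh).
    # Assumptions: nvmf_tgt already running; SPDK checkout at $SPDK_DIR; jq installed.
    set -e
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc=$SPDK_DIR/scripts/rpc.py

    # Attach local NVMe controllers, then pull Nvme0's PCI address (traddr)
    # back out of the generated bdev configuration, as the trace does.
    $SPDK_DIR/scripts/gen_nvme.sh | $rpc load_subsystem_config
    local_nvme_trid=$($rpc framework_get_config bdev \
        | jq -r '.[].params | select(.name=="Nvme0").traddr')

    # Always test against a malloc bdev (64 MB, 512-byte blocks); add the
    # local NVMe namespace only when a controller was actually found.
    bdevs=$($rpc bdev_malloc_create 64 512)
    [ -n "$local_nvme_trid" ] && bdevs="$bdevs Nvme0n1"
    echo "bdevs under test: $bdevs"

The nvmf_create_transport, nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener RPCs traced next then expose those bdevs over TCP at 10.0.0.2:4420.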
00:23:39.070 14:13:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']'
00:23:39.070 14:13:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:23:39.070 14:13:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:23:39.070 14:13:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:23:39.329 [2024-12-05 14:13:45.455156] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:39.329 14:13:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:23:39.589 14:13:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:23:39.589 14:13:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:23:39.849 14:13:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:23:39.849 14:13:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:23:39.849 14:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:40.108 [2024-12-05 14:13:46.230409] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:40.108 14:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:23:40.368 14:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']'
00:23:40.368 14:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:23:40.368 14:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:23:40.368 14:13:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:23:41.748 Initializing NVMe Controllers
00:23:41.748 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a]
00:23:41.748 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0
00:23:41.748 Initialization complete. Launching workers.
00:23:41.748 ========================================================
00:23:41.748 Latency(us)
00:23:41.748 Device Information : IOPS MiB/s Average min max
00:23:41.748 PCIE (0000:65:00.0) NSID 1 from core 0: 78749.27 307.61 405.66 13.28 9005.70
00:23:41.748 ========================================================
00:23:41.748 Total : 78749.27 307.61 405.66 13.28 9005.70
00:23:41.748
00:23:41.748 14:13:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:43.131 Initializing NVMe Controllers
00:23:43.131 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:43.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:43.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:43.131 Initialization complete. Launching workers.
00:23:43.131 ========================================================
00:23:43.131 Latency(us)
00:23:43.131 Device Information : IOPS MiB/s Average min max
00:23:43.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 106.00 0.41 9809.57 97.84 45918.22
00:23:43.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 62.00 0.24 16347.95 7964.43 47894.00
00:23:43.131 ========================================================
00:23:43.131 Total : 168.00 0.66 12222.54 97.84 47894.00
00:23:43.131
00:23:43.131 14:13:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:44.516 Initializing NVMe Controllers
00:23:44.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:44.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:44.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:44.516 Initialization complete. Launching workers.
00:23:44.516 ========================================================
00:23:44.516 Latency(us)
00:23:44.516 Device Information : IOPS MiB/s Average min max
00:23:44.516 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12204.89 47.68 2628.45 335.07 6325.03
00:23:44.516 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3790.97 14.81 8479.96 4726.86 18402.65
00:23:44.516 ========================================================
00:23:44.516 Total : 15995.86 62.48 4015.24 335.07 18402.65
00:23:44.516
00:23:44.516 14:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:23:44.516 14:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:23:44.516 14:13:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:47.061 Initializing NVMe Controllers
00:23:47.061 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:47.061 Controller IO queue size 128, less than required.
00:23:47.061 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:47.061 Controller IO queue size 128, less than required.
00:23:47.061 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:47.061 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:47.061 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:47.061 Initialization complete. Launching workers.
00:23:47.061 ========================================================
00:23:47.061 Latency(us)
00:23:47.061 Device Information : IOPS MiB/s Average min max
00:23:47.061 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2291.96 572.99 56439.69 31753.74 96101.17
00:23:47.061 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 592.58 148.15 226710.67 80118.41 370509.39
00:23:47.061 ========================================================
00:23:47.061 Total : 2884.54 721.14 91419.20 31753.74 370509.39
00:23:47.061
00:23:47.061 14:13:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:23:47.061 No valid NVMe controllers or AIO or URING devices found
00:23:47.061 Initializing NVMe Controllers
00:23:47.061 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:47.061 Controller IO queue size 128, less than required.
00:23:47.061 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:47.061 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:23:47.061 Controller IO queue size 128, less than required.
00:23:47.061 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:47.061 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:23:47.061 WARNING: Some requested NVMe devices were skipped
00:23:47.061 14:13:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:23:49.602 Initializing NVMe Controllers
00:23:49.602 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:49.602 Controller IO queue size 128, less than required.
00:23:49.602 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:49.602 Controller IO queue size 128, less than required.
00:23:49.602 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:49.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:49.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:49.602 Initialization complete. Launching workers.
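Before the per-namespace transport statistics below, it may help to see the whole perf sweep the harness has just driven in one place: queue depth 1 and 32 at 4 KiB, a 256 KiB large-IO pass, a deliberately misaligned 36964-byte pass (rejected above because 36964 is not a multiple of the 512-byte sector size), and a final run with --transport-stat, whose statistics follow. A condensed sketch, with command lines copied from the trace; the binary path and target address are verbatim from the log, the variable names are illustrative, and flags such as -HI and -O are reproduced as traced rather than explained:

    # Perf sweep against the TCP listener, reconstructed from the trace above.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    perf=$SPDK_DIR/build/bin/spdk_nvme_perf
    trid='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

    $perf -q 1   -o 4096   -w randrw -M 50 -t 1 -r "$trid"              # qd-1 latency baseline
    $perf -q 32  -o 4096   -w randrw -M 50 -t 1 -HI -r "$trid"          # qd-32, -HI flags as traced
    $perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r "$trid"     # large-IO pass with -O 16384
    $perf -q 128 -o 36964  -O 4096  -w randrw -M 50 -t 5 -r "$trid" -c 0xf -P 4   # fails: -o not sector-aligned
    $perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r "$trid" --transport-stat       # stats dumped below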
00:23:49.602
00:23:49.602 ====================
00:23:49.602 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:23:49.602 TCP transport:
00:23:49.602 polls: 32638
00:23:49.602 idle_polls: 18165
00:23:49.602 sock_completions: 14473
00:23:49.602 nvme_completions: 6661
00:23:49.602 submitted_requests: 9946
00:23:49.602 queued_requests: 1
00:23:49.602
00:23:49.602 ====================
00:23:49.602 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:23:49.602 TCP transport:
00:23:49.602 polls: 36446
00:23:49.602 idle_polls: 22275
00:23:49.602 sock_completions: 14171
00:23:49.602 nvme_completions: 8249
00:23:49.602 submitted_requests: 12404
00:23:49.602 queued_requests: 1
00:23:49.602 ========================================================
00:23:49.602 Latency(us)
00:23:49.602 Device Information : IOPS MiB/s Average min max
00:23:49.602 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1662.14 415.53 78050.98 34426.89 135988.69
00:23:49.602 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2058.46 514.61 62706.96 30431.17 104728.43
00:23:49.602 ========================================================
00:23:49.602 Total : 3720.59 930.15 69561.75 30431.17 135988.69
00:23:49.602
00:23:49.602 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:23:49.602 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:49.602 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:23:49.602 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:23:49.602 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:23:49.602 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:49.602 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:23:49.602 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:49.602 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:23:49.602 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:49.602 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:49.602 rmmod nvme_tcp
00:23:49.602 rmmod nvme_fabrics
00:23:49.602 rmmod nvme_keyring
00:23:49.862 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:49.862 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:23:49.862 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:23:49.862 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2824970 ']'
00:23:49.862 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2824970
00:23:49.862 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2824970 ']'
00:23:49.862 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2824970
00:23:49.862 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname
00:23:49.862 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:49.862 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2824970
00:23:49.862 14:13:55
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:49.862 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:49.862 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2824970' 00:23:49.862 killing process with pid 2824970 00:23:49.862 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 2824970 00:23:49.862 14:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2824970 00:23:51.770 14:13:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:51.770 14:13:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:51.770 14:13:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:51.770 14:13:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:51.770 14:13:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:51.770 14:13:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:51.770 14:13:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:51.770 14:13:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:51.770 14:13:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:51.770 14:13:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.770 14:13:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.770 14:13:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.364 14:14:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:54.364 00:23:54.364 real 0m24.371s 00:23:54.364 user 0m58.652s 00:23:54.364 sys 0m8.763s 00:23:54.364 14:14:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:54.364 14:14:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:54.364 ************************************ 00:23:54.364 END TEST nvmf_perf 00:23:54.364 ************************************ 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.365 ************************************ 00:23:54.365 START TEST nvmf_fio_host 00:23:54.365 ************************************ 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:54.365 * Looking for test storage... 
00:23:54.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:54.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.365 --rc genhtml_branch_coverage=1 00:23:54.365 --rc genhtml_function_coverage=1 00:23:54.365 --rc genhtml_legend=1 00:23:54.365 --rc geninfo_all_blocks=1 00:23:54.365 --rc geninfo_unexecuted_blocks=1 00:23:54.365 00:23:54.365 ' 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:54.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.365 --rc genhtml_branch_coverage=1 00:23:54.365 --rc genhtml_function_coverage=1 00:23:54.365 --rc genhtml_legend=1 00:23:54.365 --rc geninfo_all_blocks=1 00:23:54.365 --rc geninfo_unexecuted_blocks=1 00:23:54.365 00:23:54.365 ' 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:54.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.365 --rc genhtml_branch_coverage=1 00:23:54.365 --rc genhtml_function_coverage=1 00:23:54.365 --rc genhtml_legend=1 00:23:54.365 --rc geninfo_all_blocks=1 00:23:54.365 --rc geninfo_unexecuted_blocks=1 00:23:54.365 00:23:54.365 ' 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:54.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:54.365 --rc genhtml_branch_coverage=1 00:23:54.365 --rc genhtml_function_coverage=1 00:23:54.365 --rc genhtml_legend=1 00:23:54.365 --rc geninfo_all_blocks=1 00:23:54.365 --rc geninfo_unexecuted_blocks=1 00:23:54.365 00:23:54.365 ' 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.365 14:14:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.365 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:54.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:54.366 
14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:54.366 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:54.367 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.367 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:54.367 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:54.367 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:54.367 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.367 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.367 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.367 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:54.367 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:54.367 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:54.367 14:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:02.509 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:02.509 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:02.509 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:02.509 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:02.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:02.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms
00:24:02.509 
00:24:02.509 --- 10.0.0.2 ping statistics ---
00:24:02.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:02.509 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms
00:24:02.509 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:02.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:02.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms
00:24:02.509 
00:24:02.510 --- 10.0.0.1 ping statistics ---
00:24:02.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:02.510 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms
00:24:02.510 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:02.510 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0
00:24:02.510 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:02.510 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:02.510 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:02.510 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:02.510 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:02.510 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:02.510 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:02.510 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:24:02.510 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:24:02.510 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:02.510 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:24:02.510 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2831931
00:24:02.510 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:02.510 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:02.510 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2831931
00:24:02.510 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2831931 ']'
00:24:02.510 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:02.510 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:02.510 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:02.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:02.510 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:02.510 14:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:24:02.510 [2024-12-05 14:14:07.865443] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization...
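The xtrace above is the nvmf_tcp_init wiring from nvmf/common.sh: the first E810 port (cvl_0_0) is moved into a private network namespace for the SPDK target, the second (cvl_0_1) stays in the default namespace for the initiator, and one ping in each direction proves the 10.0.0.0/24 link before nvmf_tgt is launched inside the namespace. A condensed sketch of the same wiring, with the interface names and addresses taken from this run (the script framing is illustrative, not the verbatim helper):

#!/usr/bin/env bash
# Sketch: split target and initiator across network namespaces so that
# NVMe/TCP traffic must cross the physical link rather than loopback.
set -e

TARGET_IF=cvl_0_0        # port handed to the SPDK target (from this run)
INITIATOR_IF=cvl_0_1     # port left in the default namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Sanity-check both directions before starting nvmf_tgt in the namespace.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

Because nvmf_tgt is then started under ip netns exec cvl_0_0_ns_spdk, the 10.0.0.1 to 10.0.0.2 traffic traverses the two physical ports, which is what a phy-mode run is meant to exercise.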
00:24:02.510 [2024-12-05 14:14:07.865524] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.510 [2024-12-05 14:14:07.967616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:02.510 [2024-12-05 14:14:08.021318] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.510 [2024-12-05 14:14:08.021370] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:02.510 [2024-12-05 14:14:08.021379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.510 [2024-12-05 14:14:08.021387] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.510 [2024-12-05 14:14:08.021393] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:02.510 [2024-12-05 14:14:08.023469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.510 [2024-12-05 14:14:08.023510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.510 [2024-12-05 14:14:08.023608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:02.510 [2024-12-05 14:14:08.023610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.510 14:14:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.510 14:14:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:02.510 14:14:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:02.771 [2024-12-05 14:14:08.851888] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.771 14:14:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:02.771 14:14:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:02.771 14:14:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.771 14:14:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:03.032 Malloc1 00:24:03.032 14:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:03.294 14:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:03.294 14:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:03.555 [2024-12-05 14:14:09.731476] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.555 14:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:03.816 14:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:03.816 14:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:03.816 14:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:03.816 14:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:03.816 14:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:03.816 14:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:03.816 14:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:03.816 14:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:03.816 14:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:03.816 14:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:03.816 14:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:03.816 14:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:03.816 14:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:03.816 14:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:03.816 14:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:03.816 14:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:03.816 14:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:03.816 14:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:03.816 14:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:03.816 14:14:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:03.817 14:14:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:03.817 14:14:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:03.817 14:14:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:04.078 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:04.078 fio-3.35 00:24:04.078 Starting 1 thread 00:24:06.624 [2024-12-05 14:14:12.636995] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed5ca0 is same with the state(6) to be set 00:24:06.624 [2024-12-05 14:14:12.637044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed5ca0 is same with the state(6) to be set 00:24:06.624 00:24:06.624 test: (groupid=0, jobs=1): err= 0: pid=2832568: Thu Dec 5 14:14:12 2024 00:24:06.624 read: IOPS=12.0k, BW=46.8MiB/s (49.1MB/s)(93.7MiB/2004msec) 00:24:06.624 slat (usec): min=2, max=286, avg= 2.18, stdev= 2.60 00:24:06.624 clat (usec): min=3688, max=9132, avg=5889.35, stdev=1207.02 00:24:06.624 lat (usec): min=3694, max=9134, avg=5891.53, stdev=1207.05 00:24:06.624 clat percentiles (usec): 00:24:06.624 | 1.00th=[ 4359], 5.00th=[ 4621], 10.00th=[ 4752], 20.00th=[ 4948], 00:24:06.624 | 30.00th=[ 5014], 40.00th=[ 5145], 50.00th=[ 5276], 60.00th=[ 5473], 00:24:06.624 | 70.00th=[ 6849], 80.00th=[ 7373], 90.00th=[ 7767], 95.00th=[ 8029], 00:24:06.624 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[ 8848], 99.95th=[ 8848], 00:24:06.624 | 99.99th=[ 9110] 00:24:06.624 bw ( KiB/s): min=35960, max=55928, per=99.90%, avg=47854.00, stdev=9725.60, samples=4 00:24:06.624 iops : min= 8990, max=13982, avg=11963.50, stdev=2431.40, samples=4 00:24:06.624 write: IOPS=11.9k, BW=46.6MiB/s (48.8MB/s)(93.3MiB/2004msec); 0 zone resets 00:24:06.624 slat (usec): min=2, max=272, avg= 2.25, stdev= 1.95 00:24:06.624 clat (usec): min=2937, max=7634, avg=4743.06, stdev=958.99 00:24:06.624 lat (usec): min=2955, max=7636, avg=4745.31, stdev=959.08 00:24:06.624 clat percentiles (usec): 00:24:06.624 | 1.00th=[ 3523], 5.00th=[ 3720], 10.00th=[ 3851], 20.00th=[ 3982], 00:24:06.624 | 30.00th=[ 4080], 40.00th=[ 4178], 50.00th=[ 4293], 60.00th=[ 4424], 00:24:06.624 | 70.00th=[ 5473], 80.00th=[ 5932], 90.00th=[ 6259], 95.00th=[ 6456], 00:24:06.624 | 99.00th=[ 6783], 99.50th=[ 6980], 99.90th=[ 7242], 99.95th=[ 7242], 00:24:06.624 | 99.99th=[ 7570] 00:24:06.624 bw ( KiB/s): min=36864, max=55752, per=99.97%, avg=47674.00, stdev=9312.44, samples=4 00:24:06.624 iops : min= 9216, max=13938, avg=11918.50, stdev=2328.11, samples=4 00:24:06.624 lat (msec) : 4=11.38%, 10=88.62% 00:24:06.624 cpu : usr=71.14%, sys=27.61%, ctx=34, majf=0, minf=16 00:24:06.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:06.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:06.624 issued rwts: total=23999,23892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:06.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:06.624 00:24:06.624 Run status group 0 (all jobs): 00:24:06.624 READ: bw=46.8MiB/s (49.1MB/s), 46.8MiB/s-46.8MiB/s (49.1MB/s-49.1MB/s), io=93.7MiB (98.3MB), run=2004-2004msec 00:24:06.624 WRITE: bw=46.6MiB/s (48.8MB/s), 46.6MiB/s-46.6MiB/s (48.8MB/s-48.8MB/s), io=93.3MiB (97.9MB), run=2004-2004msec 00:24:06.624 14:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:06.624 14:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:06.624 14:14:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:06.624 14:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:06.624 14:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:06.624 14:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:06.624 14:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:06.624 14:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:06.624 14:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:06.624 14:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:06.624 14:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:06.624 14:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:06.625 14:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:06.625 14:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:06.625 14:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:06.625 14:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:06.625 14:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:06.625 14:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:06.625 14:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:06.625 14:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:06.625 14:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:06.625 14:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:06.885 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:06.885 fio-3.35 00:24:06.885 Starting 1 thread 00:24:09.432 00:24:09.432 test: (groupid=0, jobs=1): err= 0: pid=2833351: Thu Dec 5 14:14:15 2024 00:24:09.432 read: IOPS=9713, BW=152MiB/s (159MB/s)(304MiB/2006msec) 00:24:09.432 slat (usec): min=3, max=113, avg= 3.59, stdev= 1.56 00:24:09.432 clat (usec): min=1852, max=16573, avg=7922.63, stdev=1964.07 00:24:09.432 lat (usec): min=1856, max=16577, avg=7926.23, stdev=1964.18 00:24:09.432 clat percentiles (usec): 00:24:09.432 | 1.00th=[ 3851], 5.00th=[ 4883], 10.00th=[ 5407], 20.00th=[ 6194], 00:24:09.432 | 30.00th=[ 6718], 40.00th=[ 7242], 50.00th=[ 7832], 60.00th=[ 8455], 00:24:09.432 | 70.00th=[ 9110], 80.00th=[ 9896], 90.00th=[10552], 95.00th=[10945], 00:24:09.432 | 99.00th=[12256], 99.50th=[12649], 99.90th=[13566], 
99.95th=[13698], 00:24:09.432 | 99.99th=[14091] 00:24:09.432 bw ( KiB/s): min=71552, max=87456, per=49.51%, avg=76952.00, stdev=7475.88, samples=4 00:24:09.432 iops : min= 4472, max= 5466, avg=4809.50, stdev=467.24, samples=4 00:24:09.432 write: IOPS=5857, BW=91.5MiB/s (96.0MB/s)(157MiB/1719msec); 0 zone resets 00:24:09.432 slat (usec): min=39, max=328, avg=40.80, stdev= 6.49 00:24:09.432 clat (usec): min=4873, max=14130, avg=8991.02, stdev=1284.78 00:24:09.432 lat (usec): min=4913, max=14170, avg=9031.81, stdev=1285.86 00:24:09.432 clat percentiles (usec): 00:24:09.432 | 1.00th=[ 6456], 5.00th=[ 7111], 10.00th=[ 7439], 20.00th=[ 7832], 00:24:09.432 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9241], 00:24:09.432 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10683], 95.00th=[11207], 00:24:09.432 | 99.00th=[12387], 99.50th=[12911], 99.90th=[13698], 99.95th=[13829], 00:24:09.432 | 99.99th=[14091] 00:24:09.432 bw ( KiB/s): min=74368, max=91136, per=85.60%, avg=80224.00, stdev=7904.71, samples=4 00:24:09.432 iops : min= 4648, max= 5696, avg=5014.00, stdev=494.04, samples=4 00:24:09.432 lat (msec) : 2=0.01%, 4=0.86%, 10=79.97%, 20=19.16% 00:24:09.432 cpu : usr=86.43%, sys=12.72%, ctx=14, majf=0, minf=38 00:24:09.432 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:09.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:09.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:09.432 issued rwts: total=19486,10069,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:09.432 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:09.432 00:24:09.432 Run status group 0 (all jobs): 00:24:09.432 READ: bw=152MiB/s (159MB/s), 152MiB/s-152MiB/s (159MB/s-159MB/s), io=304MiB (319MB), run=2006-2006msec 00:24:09.432 WRITE: bw=91.5MiB/s (96.0MB/s), 91.5MiB/s-91.5MiB/s (96.0MB/s-96.0MB/s), io=157MiB (165MB), run=1719-1719msec 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:09.432 rmmod nvme_tcp 00:24:09.432 rmmod nvme_fabrics 00:24:09.432 rmmod nvme_keyring 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@517 -- # '[' -n 2831931 ']' 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2831931 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2831931 ']' 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2831931 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2831931 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2831931' 00:24:09.432 killing process with pid 2831931 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2831931 00:24:09.432 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2831931 00:24:09.692 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:09.692 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:09.692 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:09.692 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:09.692 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:09.692 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:09.692 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:09.692 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:09.692 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:09.692 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.692 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.692 14:14:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.240 14:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:12.240 00:24:12.240 real 0m17.835s 00:24:12.240 user 1m0.250s 00:24:12.240 sys 0m7.773s 00:24:12.240 14:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:12.240 14:14:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.240 ************************************ 00:24:12.240 END TEST nvmf_fio_host 00:24:12.240 ************************************ 00:24:12.240 14:14:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:12.240 14:14:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:12.240 14:14:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:12.240 14:14:17 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.240 ************************************ 00:24:12.240 START TEST nvmf_failover 00:24:12.240 ************************************ 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:12.240 * Looking for test storage... 00:24:12.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:12.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.240 --rc genhtml_branch_coverage=1 00:24:12.240 --rc genhtml_function_coverage=1 00:24:12.240 --rc genhtml_legend=1 00:24:12.240 --rc geninfo_all_blocks=1 00:24:12.240 --rc geninfo_unexecuted_blocks=1 00:24:12.240 00:24:12.240 ' 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:12.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.240 --rc genhtml_branch_coverage=1 00:24:12.240 --rc genhtml_function_coverage=1 00:24:12.240 --rc genhtml_legend=1 00:24:12.240 --rc geninfo_all_blocks=1 00:24:12.240 --rc geninfo_unexecuted_blocks=1 00:24:12.240 00:24:12.240 ' 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:12.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.240 --rc genhtml_branch_coverage=1 00:24:12.240 --rc genhtml_function_coverage=1 00:24:12.240 --rc genhtml_legend=1 00:24:12.240 --rc geninfo_all_blocks=1 00:24:12.240 --rc geninfo_unexecuted_blocks=1 00:24:12.240 00:24:12.240 ' 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:12.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:12.240 --rc genhtml_branch_coverage=1 00:24:12.240 --rc genhtml_function_coverage=1 00:24:12.240 --rc genhtml_legend=1 00:24:12.240 --rc geninfo_all_blocks=1 00:24:12.240 --rc geninfo_unexecuted_blocks=1 00:24:12.240 00:24:12.240 ' 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:12.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
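From here on two RPC endpoints are in play: the nvmf target listens on the default /var/tmp/spdk.sock, while bdevperf is started with -r /var/tmp/bdevperf.sock, so rpc.py needs -s to pick the right one. The failover path is provisioned later in this log by exporting one subsystem on several listeners and attaching both portals to a single controller; a hedged sketch of that sequence, using only the arguments visible in this run:

# Sketch of the RPC sequence failover.sh drives below (paths from this workspace).
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock

# Against the target's default socket: create the transport, back it with a
# malloc bdev, and expose the subsystem on three TCP listeners.
$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc0
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

# Against bdevperf's own socket: attach the primary path, then a second path
# with -x failover so NVMe0n1 can move between listeners when one goes away.
$rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover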
00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:12.240 14:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:20.380 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.380 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:20.381 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:20.381 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:20.381 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
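Each "Found net devices under ..." line above comes from a plain sysfs glob: for a NIC's PCI function, the kernel lists its net device names under /sys/bus/pci/devices/<bdf>/net/. A trimmed sketch of that lookup, with the two E810 ports reported in this run (the standalone script framing is illustrative):

# Sketch of the sysfs lookup behind gather_supported_nvmf_pci_devs.
pci_devs=(0000:4b:00.0 0000:4b:00.1)   # E810 (0x8086:0x159b) ports from this run

net_devs=()
for pci in "${pci_devs[@]}"; do
    # Every net device bound to this PCI function appears as a directory here.
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # Strip the directory prefix, keeping only the interface names.
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done

With two interfaces found, the assignments that follow keep cvl_0_0 as NVMF_TARGET_INTERFACE and cvl_0_1 as NVMF_INITIATOR_INTERFACE.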
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:20.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:20.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.723 ms
00:24:20.381 
00:24:20.381 --- 10.0.0.2 ping statistics ---
00:24:20.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:20.381 rtt min/avg/max/mdev = 0.723/0.723/0.723/0.000 ms
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:20.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:20.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms
00:24:20.381 
00:24:20.381 --- 10.0.0.1 ping statistics ---
00:24:20.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:20.381 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2837953
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2837953
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2837953 ']'
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:20.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:20.381 14:14:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:20.381 [2024-12-05 14:14:25.806328] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization...
00:24:20.381 [2024-12-05 14:14:25.806399] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:20.381 [2024-12-05 14:14:25.906966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:24:20.381 [2024-12-05 14:14:25.958961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
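Note the ipts wrapper in the firewall step above: it re-issues the iptables arguments with an -m comment tag that records them, and the iptr cleanup seen at the end of the fio_host test earlier (iptables-save | grep -v SPDK_NVMF | iptables-restore) removes exactly those tagged rules. A sketch of the pair as reconstructed from the xtrace; the real helpers in nvmf/common.sh may differ in detail:

# Tag every firewall rule the test inserts so cleanup can find it later.
ipts() {
    # Apply the rule and record the original arguments in the comment.
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# Remove only the SPDK-tagged rules by round-tripping the whole ruleset.
iptr() {
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
iptr  # at teardown: every rule carrying the SPDK_NVMF comment disappears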
00:24:20.381 [2024-12-05 14:14:25.959014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.381 [2024-12-05 14:14:25.959024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.381 [2024-12-05 14:14:25.959031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.381 [2024-12-05 14:14:25.959037] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:20.381 [2024-12-05 14:14:25.961122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.381 [2024-12-05 14:14:25.961284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.381 [2024-12-05 14:14:25.961286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:20.381 14:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:20.381 14:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:20.381 14:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:20.381 14:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:20.381 14:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:20.640 14:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.640 14:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:20.640 [2024-12-05 14:14:26.842866] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.640 14:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:20.902 Malloc0 00:24:20.902 14:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:21.162 14:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:21.422 14:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:21.422 [2024-12-05 14:14:27.657335] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.422 14:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:21.683 [2024-12-05 14:14:27.853819] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:21.683 14:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:21.944 [2024-12-05 14:14:28.054492] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
00:24:21.683 14:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:21.944 [2024-12-05 14:14:28.054492] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:24:21.944 14:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:24:21.944 14:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2838423
00:24:21.944 14:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:21.944 14:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2838423 /var/tmp/bdevperf.sock
00:24:21.944 14:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2838423 ']'
00:24:21.944 14:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:21.944 14:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:21.944 14:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:21.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:21.944 14:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:21.944 14:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:22.915 14:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:22.915 14:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:24:22.915 14:14:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:22.915 NVMe0n1
00:24:23.176 14:14:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:23.437
00:24:23.437 14:14:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2838756
00:24:23.437 14:14:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:23.437 14:14:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
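Both bdev_nvme_attach_controller calls above name the same controller (-b NVMe0) and pass -x failover, so the second call does not create a new bdev: it registers 10.0.0.2:4421 as an alternate path for the existing NVMe0n1, which is why only the first call prints a bdev name. One way to inspect the resulting path set from the initiator side would be (a sketch; the exact output shape depends on the SPDK version):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers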
00:24:24.383 14:14:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[2024-12-05 14:14:30.758842] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916210 is same with the state(6) to be set
[... tcp.c:1790 message repeated verbatim for tqpair=0x916210 from 14:14:30.758842 through 14:14:30.759262; duplicate lines omitted ...]
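The tcp.c:1790 burst elided above comes from the target while the qpair behind the removed 4420 listener is torn down mid-I/O; the message appears to fire repeatedly as the connection drains, so dozens of near-identical lines within a fraction of a millisecond are expected here rather than a sign of a separate failure. The failover step itself is just two lines of the traced script (a sketch, same commands as the xtrace):

  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3   # give bdevperf time to notice the drop and fail over to the 4421 path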
00:24:24.646 14:14:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:24:28.077 14:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:28.077
00:24:28.077 14:14:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
[2024-12-05 14:14:34.254385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x916cc0 is same with the state(6) to be set
[... tcp.c:1790 message repeated verbatim for tqpair=0x916cc0 from 14:14:34.254385 through 14:14:34.254809; duplicate lines omitted ...]
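The second cycle widens the path set before cutting the active one: a third path on 4422 is attached, then the 4421 listener is removed while I/O continues. To confirm what the subsystem still exposes at a point like this, listing the subsystems shows the surviving listen_addresses (a sketch; run against the target's default RPC socket):

  rpc.py nvmf_get_subsystems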
00:24:28.078 14:14:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:24:31.389 14:14:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[2024-12-05 14:14:37.440371] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:31.389 14:14:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:24:32.329 14:14:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:32.590 [2024-12-05 14:14:38.631715] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7dc480 is same with the state(6) to be set
[... tcp.c:1790 message repeated verbatim for tqpair=0x7dc480 from 14:14:38.631715 through 14:14:38.632121; duplicate lines omitted ...]
00:24:32.591 14:14:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2838756
00:24:39.182 {
00:24:39.182   "results": [
00:24:39.182     {
00:24:39.182       "job": "NVMe0n1",
00:24:39.182       "core_mask": "0x1",
00:24:39.182       "workload": "verify",
00:24:39.182       "status": "finished",
00:24:39.182       "verify_range": {
00:24:39.182         "start": 0,
00:24:39.182         "length": 16384
00:24:39.182       },
00:24:39.182       "queue_depth": 128,
00:24:39.182       "io_size": 4096,
00:24:39.182       "runtime": 15.007262,
00:24:39.182       "iops": 12454.503692945455,
00:24:39.182       "mibps": 48.65040505056818,
00:24:39.182       "io_failed": 6829,
00:24:39.182       "io_timeout": 0,
00:24:39.182       "avg_latency_us": 9894.281265289197,
00:24:39.182       "min_latency_us": 532.48,
00:24:39.182       "max_latency_us": 12670.293333333333
00:24:39.182     }
00:24:39.182   ],
00:24:39.182   "core_count": 1
00:24:39.182 }
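A note on the results block above: mibps is just iops scaled by the 4096-byte io_size, 12454.503692945455 * 4096 / 2^20 = 48.650405 MiB/s, matching the reported value, and the 6829 io_failed entries line up with I/Os caught in flight during the three forced path drops (compare the ABORTED - SQ DELETION completions replayed from try.txt below). The arithmetic as a one-liner:

  awk 'BEGIN { printf "%.6f\n", 12454.503692945455 * 4096 / 1048576 }'   # 48.650405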
00:24:39.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2838423
00:24:39.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2838423 ']'
00:24:39.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2838423
00:24:39.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:24:39.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:39.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2838423
00:24:39.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:39.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:39.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2838423'
killing process with pid 2838423
00:24:39.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2838423
00:24:39.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2838423
00:24:39.182 14:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-12-05 14:14:28.130077] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization...
[2024-12-05 14:14:28.130136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2838423 ]
[2024-12-05 14:14:28.217283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-05 14:14:28.253541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 15 seconds...
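The try.txt replay above explains the timestamps: bdevperf was started at 14:14:28 with -z (stay idle until told to run over the -r RPC socket) and -c 0x1 (core 0 only, complementing the target's 0xE mask), and only began I/O once bdevperf.py issued perform_tests. Reduced to a sketch, with paths relative to the spdk checkout:

  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests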
00:24:39.182 10942.00 IOPS, 42.74 MiB/s [2024-12-05T13:14:45.482Z] [2024-12-05 14:14:30.760078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:39.182 [2024-12-05 14:14:30.760112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... one nvme_qpair.c:243 READ command print plus one nvme_qpair.c:474 ABORTED - SQ DELETION completion repeated per in-flight I/O, lba:94056 through lba:94440 (14:14:30.760128 through 14:14:30.760962); duplicate pairs omitted ...]
00:24:39.184 [2024-12-05 14:14:30.760971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94448 len:8
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.760978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.760988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.760995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:39.184 [2024-12-05 14:14:30.761145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761317] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:94656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.184 [2024-12-05 14:14:30.761428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.184 [2024-12-05 14:14:30.761435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.185 [2024-12-05 14:14:30.761452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.185 [2024-12-05 14:14:30.761472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.185 [2024-12-05 14:14:30.761489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.185 [2024-12-05 14:14:30.761505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:39.185 [2024-12-05 14:14:30.761841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.761991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.761999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.762008] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.762017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.762027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.762034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.762044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.762051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.762060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.762067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.762076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.185 [2024-12-05 14:14:30.762084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.185 [2024-12-05 14:14:30.762093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:94976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.186 [2024-12-05 14:14:30.762100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.186 [2024-12-05 14:14:30.762109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.186 [2024-12-05 14:14:30.762117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.186 [2024-12-05 14:14:30.762126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.186 [2024-12-05 14:14:30.762133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.186 [2024-12-05 14:14:30.762142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.186 [2024-12-05 14:14:30.762150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.186 [2024-12-05 14:14:30.762159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.186 [2024-12-05 14:14:30.762166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.186 [2024-12-05 14:14:30.762175] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.186 [2024-12-05 14:14:30.762183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.186 [2024-12-05 14:14:30.762192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.186 [2024-12-05 14:14:30.762199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.186 [2024-12-05 14:14:30.762208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.186 [2024-12-05 14:14:30.762215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.186 [2024-12-05 14:14:30.762224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.186 [2024-12-05 14:14:30.762233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.186 [2024-12-05 14:14:30.762242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.186 [2024-12-05 14:14:30.762250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.186 [2024-12-05 14:14:30.762259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.186 [2024-12-05 14:14:30.762266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.186 [2024-12-05 14:14:30.762292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:39.186 [2024-12-05 14:14:30.762300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:39.186 [2024-12-05 14:14:30.762307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95064 len:8 PRP1 0x0 PRP2 0x0 00:24:39.186 [2024-12-05 14:14:30.762314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.186 [2024-12-05 14:14:30.762355] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:39.186 [2024-12-05 14:14:30.762376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.186 [2024-12-05 14:14:30.762385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.186 [2024-12-05 14:14:30.762394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.186 [2024-12-05 14:14:30.762401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.186 [2024-12-05 14:14:30.762410] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.186 [2024-12-05 14:14:30.762417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.186 [2024-12-05 14:14:30.762425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.186 [2024-12-05 14:14:30.762433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.186 [2024-12-05 14:14:30.762440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:39.186 [2024-12-05 14:14:30.762483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd1ada0 (9): Bad file descriptor 00:24:39.186 [2024-12-05 14:14:30.766075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:39.186 [2024-12-05 14:14:30.796035] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:24:39.186 10889.00 IOPS, 42.54 MiB/s [2024-12-05T13:14:45.486Z] 11043.00 IOPS, 43.14 MiB/s [2024-12-05T13:14:45.486Z] 11449.00 IOPS, 44.72 MiB/s [2024-12-05T13:14:45.486Z] [2024-12-05 14:14:34.255029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:40664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.186 [2024-12-05 14:14:34.255060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.186 [2024-12-05 14:14:34.255072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.186 [2024-12-05 14:14:34.255078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.186 [2024-12-05 14:14:34.255089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.186 [2024-12-05 14:14:34.255094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.186 [2024-12-05 14:14:34.255102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:40688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.186 [2024-12-05 14:14:34.255108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.186 [2024-12-05 14:14:34.255115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:40696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.186 [2024-12-05 14:14:34.255120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.186 [2024-12-05 14:14:34.255127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.186 [2024-12-05 14:14:34.255132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.186 [2024-12-05 
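The burst above is the expected signature of a bdev_nvme failover: every command still queued on the I/O qpair is manually completed with ABORTED - SQ DELETION (00/08) before the path switches from 10.0.0.2:4420 to 10.0.0.2:4421 and the controller is reset. As a minimal sketch (illustrative only, not code from this test run), a host completion callback can recognize that status using public SPDK API; when and how the reset is deferred out of the completion path, and any resubmission policy, are left out:

/* Illustrative sketch only. Shows how an SPDK host application might
 * recognize the "ABORTED - SQ DELETION (00/08)" completions printed above
 * and kick a controller reset, using the public SPDK NVMe API. */
#include "spdk/nvme.h"

static void
io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct spdk_nvme_ctrlr *ctrlr = arg;

	if (spdk_nvme_cpl_is_error(cpl) &&
	    cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* SCT 00 / SC 08 is exactly the (00/08) in the log: the
		 * submission queue was deleted (e.g. during failover), so
		 * the command never executed and is safe to resubmit once
		 * the controller is back. A real application would defer
		 * this outside the completion callback. */
		spdk_nvme_ctrlr_reset(ctrlr);
	}
}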
[2024-12-05 14:14:34.255029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:40664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:39.186 [2024-12-05 14:14:34.255060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the READ / ABORTED - SQ DELETION (00/08) pair repeats for the remaining queued reads on qid:1, lba:40672 through lba:41296 in len:8 steps ...]
[... the queued writes follow: WRITE sqid:1 nsid:1 lba:41312 through lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed with ABORTED - SQ DELETION (00/08) ...] 00:24:39.189 [2024-12-05
14:14:34.256234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256353] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:96 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:39.189 [2024-12-05 14:14:34.256588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:39.189 [2024-12-05 
14:14:34.256613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:39.189 [2024-12-05 14:14:34.256618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41304 len:8 PRP1 0x0 PRP2 0x0 00:24:39.189 [2024-12-05 14:14:34.256624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256658] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:39.189 [2024-12-05 14:14:34.256676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.189 [2024-12-05 14:14:34.256682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.189 [2024-12-05 14:14:34.256694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.189 [2024-12-05 14:14:34.256705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.189 [2024-12-05 14:14:34.256712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.190 [2024-12-05 14:14:34.256717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.190 [2024-12-05 14:14:34.256722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:39.190 [2024-12-05 14:14:34.259206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:39.190 [2024-12-05 14:14:34.259228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd1ada0 (9): Bad file descriptor 00:24:39.190 [2024-12-05 14:14:34.286675] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
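The (00/08) status attached to every aborted command above is NVMe status code type 0x00 (generic) with status code 0x08, Command Aborted due to SQ Deletion: when bdev_nvme tears down the I/O qpair to fail over from 10.0.0.2:4421 to 10.0.0.2:4422, each request still queued on that submission queue is completed with this status rather than a media error, so it can be retried once the controller is reset on the new path. A minimal sketch of how an I/O completion callback can recognize that status, assuming only the public definitions in SPDK's spdk/nvme_spec.h; the callback name io_complete_cb is illustrative, not part of this test:

    #include "spdk/stdinc.h"
    #include "spdk/nvme_spec.h"

    /* "(00/08)" in the log is (sct/sc): Status Code Type 0x00
     * (SPDK_NVME_SCT_GENERIC) and Status Code 0x08
     * (SPDK_NVME_SC_ABORTED_SQ_DELETION). */
    static bool
    cpl_aborted_by_sq_deletion(const struct spdk_nvme_cpl *cpl)
    {
            return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                   cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
    }

    /* Illustrative I/O completion callback: such completions are
     * transport-teardown artifacts, not media errors, so the request
     * is a candidate for retry after failover to the next path. */
    static void
    io_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
            (void)cb_arg;
            if (cpl_aborted_by_sq_deletion(cpl)) {
                    printf("cid:%u aborted by SQ deletion; retry on new path\n",
                           (unsigned)cpl->cid);
            }
    }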
00:24:39.190 11648.40 IOPS, 45.50 MiB/s [2024-12-05T13:14:45.490Z] 11877.00 IOPS, 46.39 MiB/s [2024-12-05T13:14:45.490Z] 12059.43 IOPS, 47.11 MiB/s [2024-12-05T13:14:45.490Z] 12202.25 IOPS, 47.67 MiB/s [2024-12-05T13:14:45.490Z] [2024-12-05 14:14:38.632741 through 14:14:38.634266] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 nsid:1 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 for lba:113800 through lba:114424, then WRITE sqid:1 nsid:1 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 for lba:114432 through lba:114792, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (repetitive per-command NOTICE pairs elided) 00:24:39.193 [2024-12-05 14:14:38.634283 through 14:14:38.634334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o / 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: WRITE sqid:1 cid:0 nsid:1 lba:114800, lba:114808 and lba:114816 len:8 PRP1 0x0 PRP2 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.193 [2024-12-05 14:14:38.634368] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 [2024-12-05 14:14:38.634385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.193 [2024-12-05 14:14:38.634391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.193 [2024-12-05 14:14:38.634399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.193 [2024-12-05 14:14:38.634404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.193 [2024-12-05 14:14:38.634410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.193 [2024-12-05 14:14:38.634415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.193 [2024-12-05 14:14:38.634421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:39.193 [2024-12-05 14:14:38.634429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:39.193 [2024-12-05 14:14:38.634434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:39.193 [2024-12-05 14:14:38.636921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:39.193 [2024-12-05 14:14:38.636944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd1ada0 (9): Bad file descriptor 00:24:39.193 [2024-12-05 14:14:38.703366] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
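The storm of notices above is the expected signature of a path drop: every command still queued on the dying qpair is completed with ABORTED - SQ DELETION, bdev_nvme fails the transport ID over from 10.0.0.2:4422 to 10.0.0.2:4420, and the subsequent controller reset succeeds. A minimal sketch of how a failover like this can be provoked from the target side, assuming the rpc.py path and subsystem NQN that appear elsewhere in this log (the exact choreography inside host/failover.sh may differ):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
# Dropping the listener the host is currently connected to tears down its
# qpairs; queued I/O completes with ABORTED - SQ DELETION and bdev_nvme
# retries it on the next transport ID in its failover list.
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422
# Re-adding the listener later makes the path available for the next cycle.
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422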
00:24:39.193 12170.44 IOPS, 47.54 MiB/s
[2024-12-05T13:14:45.493Z] 12244.70 IOPS, 47.83 MiB/s
[2024-12-05T13:14:45.493Z] 12306.64 IOPS, 48.07 MiB/s
[2024-12-05T13:14:45.493Z] 12360.17 IOPS, 48.28 MiB/s
[2024-12-05T13:14:45.493Z] 12393.31 IOPS, 48.41 MiB/s
[2024-12-05T13:14:45.493Z] 12423.43 IOPS, 48.53 MiB/s
[2024-12-05T13:14:45.493Z] 12452.53 IOPS, 48.64 MiB/s
00:24:39.193 Latency(us)
00:24:39.193 [2024-12-05T13:14:45.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:39.193 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:39.193 Verification LBA range: start 0x0 length 0x4000
00:24:39.193 NVMe0n1 : 15.01 12454.50 48.65 455.05 0.00 9894.28 532.48 12670.29
00:24:39.193 [2024-12-05T13:14:45.493Z] ===================================================================================================================
00:24:39.193 [2024-12-05T13:14:45.493Z] Total : 12454.50 48.65 455.05 0.00 9894.28 532.48 12670.29
00:24:39.193 Received shutdown signal, test time was about 15.000000 seconds
00:24:39.193
00:24:39.193 Latency(us)
00:24:39.193 [2024-12-05T13:14:45.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:39.193 [2024-12-05T13:14:45.493Z] ===================================================================================================================
00:24:39.193 [2024-12-05T13:14:45.493Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:39.193 14:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:39.193 14:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:39.193 14:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2841673
00:24:39.193 14:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2841673 /var/tmp/bdevperf.sock
00:24:39.193 14:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:39.193 14:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2841673 ']'
00:24:39.193 14:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:39.193 14:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:39.193 14:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:39.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
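The grep traced above is the entire pass/fail gate for the 15-second run: the test injects three path failures, so the captured output must contain exactly three reset-complete notices. Spelled out as a standalone check, assuming the try.txt capture file that failover.sh cats further down:

LOG=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
count=$(grep -c 'Resetting controller successful' "$LOG")
# One successful controller reset per injected failover, no more, no fewer.
(( count != 3 )) && exit 1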
00:24:39.193 14:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.193 14:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:39.765 14:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.765 14:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:39.765 14:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:39.765 [2024-12-05 14:14:45.940827] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:39.765 14:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:40.025 [2024-12-05 14:14:46.125255] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:40.025 14:14:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:40.286 NVMe0n1 00:24:40.286 14:14:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:40.546 00:24:40.546 14:14:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:40.806 00:24:40.806 14:14:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:40.806 14:14:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:41.065 14:14:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:41.326 14:14:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:44.627 14:14:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:44.627 14:14:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:44.627 14:14:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2842798 00:24:44.627 14:14:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:44.627 14:14:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2842798 00:24:45.571 { 00:24:45.571 "results": [ 00:24:45.571 { 00:24:45.571 "job": "NVMe0n1", 00:24:45.571 "core_mask": "0x1", 
00:24:45.571 "workload": "verify", 00:24:45.571 "status": "finished", 00:24:45.571 "verify_range": { 00:24:45.571 "start": 0, 00:24:45.571 "length": 16384 00:24:45.571 }, 00:24:45.571 "queue_depth": 128, 00:24:45.571 "io_size": 4096, 00:24:45.571 "runtime": 1.006057, 00:24:45.571 "iops": 12934.654795901226, 00:24:45.571 "mibps": 50.525995296489164, 00:24:45.571 "io_failed": 0, 00:24:45.571 "io_timeout": 0, 00:24:45.571 "avg_latency_us": 9858.427472015164, 00:24:45.571 "min_latency_us": 2020.6933333333334, 00:24:45.571 "max_latency_us": 10158.08 00:24:45.571 } 00:24:45.571 ], 00:24:45.571 "core_count": 1 00:24:45.571 } 00:24:45.571 14:14:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:45.571 [2024-12-05 14:14:44.985821] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:24:45.571 [2024-12-05 14:14:44.985882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2841673 ] 00:24:45.571 [2024-12-05 14:14:45.070864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.571 [2024-12-05 14:14:45.099566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.571 [2024-12-05 14:14:47.397902] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:45.571 [2024-12-05 14:14:47.397940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.571 [2024-12-05 14:14:47.397949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.571 [2024-12-05 14:14:47.397956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.571 [2024-12-05 14:14:47.397961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.571 [2024-12-05 14:14:47.397967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.571 [2024-12-05 14:14:47.397972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.571 [2024-12-05 14:14:47.397978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:45.571 [2024-12-05 14:14:47.397983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.571 [2024-12-05 14:14:47.397989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:24:45.571 [2024-12-05 14:14:47.398009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:24:45.571 [2024-12-05 14:14:47.398020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c3da0 (9): Bad file descriptor 00:24:45.571 [2024-12-05 14:14:47.540618] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:24:45.571 Running I/O for 1 seconds... 00:24:45.571 12885.00 IOPS, 50.33 MiB/s 00:24:45.571 Latency(us) 00:24:45.571 [2024-12-05T13:14:51.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.571 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:45.571 Verification LBA range: start 0x0 length 0x4000 00:24:45.571 NVMe0n1 : 1.01 12934.65 50.53 0.00 0.00 9858.43 2020.69 10158.08 00:24:45.571 [2024-12-05T13:14:51.871Z] =================================================================================================================== 00:24:45.571 [2024-12-05T13:14:51.871Z] Total : 12934.65 50.53 0.00 0.00 9858.43 2020.69 10158.08 00:24:45.571 14:14:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:45.571 14:14:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:45.831 14:14:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:45.831 14:14:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:45.831 14:14:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:46.091 14:14:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:46.351 14:14:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:49.692 14:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:49.692 14:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:49.692 14:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2841673 00:24:49.692 14:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2841673 ']' 00:24:49.692 14:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2841673 00:24:49.692 14:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:49.692 14:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:49.692 14:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2841673 00:24:49.692 14:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:49.692 14:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:49.692 14:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2841673' 00:24:49.692 killing process with pid 2841673 00:24:49.692 14:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2841673 00:24:49.692 14:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2841673 00:24:49.692 14:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:49.692 14:14:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:49.951 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:49.951 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:49.951 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:49.951 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:49.951 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:49.952 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:49.952 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:49.952 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:49.952 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:49.952 rmmod nvme_tcp 00:24:49.952 rmmod nvme_fabrics 00:24:49.952 rmmod nvme_keyring 00:24:49.952 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:49.952 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:49.952 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:49.952 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2837953 ']' 00:24:49.952 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2837953 00:24:49.952 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2837953 ']' 00:24:49.952 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2837953 00:24:49.952 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:49.952 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:49.952 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2837953 00:24:49.952 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:49.952 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:49.952 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2837953' 00:24:49.952 killing process with pid 2837953 00:24:49.952 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2837953 00:24:49.952 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2837953 00:24:50.211 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:24:50.211 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:50.211 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:50.211 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:50.211 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:50.211 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:50.211 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:50.211 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:50.211 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:50.211 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.211 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.211 14:14:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.123 14:14:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:52.123 00:24:52.123 real 0m40.342s 00:24:52.123 user 2m3.914s 00:24:52.123 sys 0m8.833s 00:24:52.123 14:14:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:52.123 14:14:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:52.123 ************************************ 00:24:52.123 END TEST nvmf_failover 00:24:52.123 ************************************ 00:24:52.123 14:14:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:52.123 14:14:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:52.123 14:14:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:52.123 14:14:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.386 ************************************ 00:24:52.386 START TEST nvmf_host_discovery 00:24:52.386 ************************************ 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:52.386 * Looking for test storage... 
00:24:52.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:52.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.386 --rc genhtml_branch_coverage=1 00:24:52.386 --rc genhtml_function_coverage=1 00:24:52.386 --rc genhtml_legend=1 00:24:52.386 --rc geninfo_all_blocks=1 00:24:52.386 --rc geninfo_unexecuted_blocks=1 00:24:52.386 00:24:52.386 ' 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:52.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.386 --rc genhtml_branch_coverage=1 00:24:52.386 --rc genhtml_function_coverage=1 00:24:52.386 --rc genhtml_legend=1 00:24:52.386 --rc geninfo_all_blocks=1 00:24:52.386 --rc geninfo_unexecuted_blocks=1 00:24:52.386 00:24:52.386 ' 00:24:52.386 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:52.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.386 --rc genhtml_branch_coverage=1 00:24:52.387 --rc genhtml_function_coverage=1 00:24:52.387 --rc genhtml_legend=1 00:24:52.387 --rc geninfo_all_blocks=1 00:24:52.387 --rc geninfo_unexecuted_blocks=1 00:24:52.387 00:24:52.387 ' 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:52.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.387 --rc genhtml_branch_coverage=1 00:24:52.387 --rc genhtml_function_coverage=1 00:24:52.387 --rc genhtml_legend=1 00:24:52.387 --rc geninfo_all_blocks=1 00:24:52.387 --rc geninfo_unexecuted_blocks=1 00:24:52.387 00:24:52.387 ' 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:52.387 14:14:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:52.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:52.387 14:14:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:00.532 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:00.532 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:00.532 14:15:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:00.532 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:00.532 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:00.532 
14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.532 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.533 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:00.533 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:00.533 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.533 14:15:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:00.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.706 ms 00:25:00.533 00:25:00.533 --- 10.0.0.2 ping statistics --- 00:25:00.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.533 rtt min/avg/max/mdev = 0.706/0.706/0.706/0.000 ms 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:00.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:25:00.533 00:25:00.533 --- 10.0.0.1 ping statistics --- 00:25:00.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.533 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2848141 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2848141 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2848141 ']' 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:00.533 14:15:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.533 [2024-12-05 14:15:06.303993] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
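The two pings above are the sanity check for the physical-NIC topology this suite runs on: the first e810 port (cvl_0_0, 10.0.0.2) is moved into a dedicated network namespace to act as the target, while the second port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, so NVMe/TCP traffic genuinely crosses the wire between the two ports. Condensed from the nvmf_tcp_init trace above (the iptables comment text that the ipts helper appends is omitted here):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into its own netns
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root netns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target netns -> initiator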
00:25:00.533 [2024-12-05 14:15:06.304063] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.533 [2024-12-05 14:15:06.406389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.533 [2024-12-05 14:15:06.457757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.533 [2024-12-05 14:15:06.457810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.533 [2024-12-05 14:15:06.457819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.533 [2024-12-05 14:15:06.457826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.533 [2024-12-05 14:15:06.457832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:00.533 [2024-12-05 14:15:06.458603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.104 [2024-12-05 14:15:07.181983] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.104 [2024-12-05 14:15:07.194302] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.104 null0 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.104 null1 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2848277 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2848277 /tmp/host.sock 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2848277 ']' 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:01.104 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:01.104 14:15:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.104 [2024-12-05 14:15:07.292847] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
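Both `waitforlisten 2848141` (the target on /var/tmp/spdk.sock) and `waitforlisten 2848277 /tmp/host.sock` (the host app started just above) go through the same autotest_common.sh helper traced at @835-@868. The body below is a minimal reconstruction from those markers; the polling probe and interval are assumptions, not a copy of the real helper:

  waitforlisten() {
      # sketch only -- the real helper is autotest_common.sh@835-868;
      # rpc_cmd is the harness's wrapper around scripts/rpc.py
      local pid=$1
      local rpc_addr=${2:-/var/tmp/spdk.sock}      # host app passes /tmp/host.sock
      local max_retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      local i
      for (( i = 0; i < max_retries; i++ )); do
          kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
          # assumed probe: rpc_get_methods answers once the RPC socket is up
          rpc_cmd -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
          sleep 0.5
      done
      return 1
  }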
00:25:01.104 [2024-12-05 14:15:07.292909] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848277 ] 00:25:01.104 [2024-12-05 14:15:07.382742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.364 [2024-12-05 14:15:07.436596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.935 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:01.935 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:01.935 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:01.935 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:01.935 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.935 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # jq -r '.[].name' 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.936 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.196 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.196 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:02.196 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd 
-s /tmp/host.sock bdev_nvme_get_controllers 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.197 [2024-12-05 14:15:08.461538] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:02.197 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:02.457 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:02.458 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.458 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.458 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.458 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:02.458 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:02.458 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:02.458 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:02.458 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:02.458 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:02.458 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:02.458 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:02.458 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.458 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:02.458 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.458 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:02.458 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.458 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:25:02.458 14:15:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:03.028 [2024-12-05 14:15:09.181414] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:03.028 [2024-12-05 14:15:09.181435] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:03.028 [2024-12-05 14:15:09.181448] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:03.028 
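Every `waitforcondition '...'` in this trace expands into the same autotest_common.sh@918-@924 markers (local cond, local max=10, (( max-- )), eval, sleep 1). Reassembled, the helper is a ten-try eval loop:

  waitforcondition() {
      # reassembled from the @918-@924 xtrace above
      local cond=$1      # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
      local max=10
      while (( max-- )); do
          if eval "$cond"; then
              return 0   # condition met (@922)
          fi
          sleep 1        # @924, then re-evaluate
      done
      return 1           # gave up after 10 tries
  }

Compound conditions like 'get_notification_count && ((notification_count == expected_count))' work because eval runs them in the caller's shell, so get_notification_count can set the variables the arithmetic test then reads.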
[2024-12-05 14:15:09.308852] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:03.288 [2024-12-05 14:15:09.410703] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:03.288 [2024-12-05 14:15:09.411526] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x9ed670:1 started. 00:25:03.288 [2024-12-05 14:15:09.413142] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:03.288 [2024-12-05 14:15:09.413159] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:03.288 [2024-12-05 14:15:09.459092] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x9ed670 was disconnected and freed. delete nvme_qpair. 00:25:03.548 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.548 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:03.548 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.549 14:15:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:03.549 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.809 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:03.809 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:03.809 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:03.809 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:03.809 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:03.809 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.809 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.809 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.809 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:03.809 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:03.809 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:03.809 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:03.809 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:03.809 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:03.809 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:03.809 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:03.809 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.809 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:03.809 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.809 14:15:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:03.809 [2024-12-05 14:15:10.100618] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x9ed850:1 started. 
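The discovery.sh@74-@75 pair that keeps appearing (`notify_get_notifications -i $notify_id | jq '. | length'`, then notification_count= and notify_id=) implies a cursor-style counter. A sketch consistent with the observed notify_id progression 0 -> 1 -> 2; the exact cursor-update rule is inferred from the trace, not quoted:

  get_notification_count() {
      # count events newer than the cursor, then advance it (inferred rule)
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
          | jq '. | length')
      notify_id=$(( notify_id + notification_count ))
  }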
00:25:04.069 14:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.069 14:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:04.069 14:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:04.069 14:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:04.069 14:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:04.069 14:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:04.069 14:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:04.069 14:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:04.069 14:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:04.070 14:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:04.070 14:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:04.070 14:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:04.070 14:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:04.070 14:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.070 14:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.070 [2024-12-05 14:15:10.151184] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x9ed850 was disconnected and freed. delete nvme_qpair. 00:25:04.070 14:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.070 14:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:04.070 14:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:04.070 14:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:04.070 14:15:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.011 [2024-12-05 14:15:11.244669] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:05.011 [2024-12-05 14:15:11.245719] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:05.011 [2024-12-05 14:15:11.245744] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:05.011 14:15:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:05.011 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:05.273 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:05.273 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:05.273 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.273 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:05.273 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.273 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:05.273 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.273 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:05.273 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:05.273 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:05.273 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:05.273 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:05.273 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:05.273 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:05.273 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:05.273 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:05.273 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:05.273 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.273 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:05.273 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.273 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:05.273 [2024-12-05 14:15:11.373120] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:05.273 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.273 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:05.273 14:15:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:05.273 [2024-12-05 14:15:11.438230] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:05.273 [2024-12-05 14:15:11.438274] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:05.273 [2024-12-05 14:15:11.438284] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:05.273 [2024-12-05 14:15:11.438289] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:06.215 14:15:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.215 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.215 [2024-12-05 14:15:12.504521] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:06.215 [2024-12-05 14:15:12.504538] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:06.215 [2024-12-05 14:15:12.507740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.215 [2024-12-05 14:15:12.507757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.215 [2024-12-05 14:15:12.507765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.215 [2024-12-05 14:15:12.507770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.215 [2024-12-05 14:15:12.507776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.215 [2024-12-05 14:15:12.507781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.215 [2024-12-05 14:15:12.507787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.215 [2024-12-05 14:15:12.507793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.216 [2024-12-05 14:15:12.507798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdc50 is same with the state(6) to be set 00:25:06.216 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.216 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:06.216 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:06.216 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:06.216 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:06.216 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:06.477 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:06.477 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:06.477 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:06.477 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.477 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:06.477 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.477 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:06.477 [2024-12-05 14:15:12.517754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bdc50 (9): Bad file descriptor 00:25:06.477 [2024-12-05 14:15:12.527787] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:06.477 [2024-12-05 14:15:12.527797] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:06.477 [2024-12-05 14:15:12.527802] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:06.477 [2024-12-05 14:15:12.527809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:06.477 [2024-12-05 14:15:12.527822] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:06.477 [2024-12-05 14:15:12.528172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.477 [2024-12-05 14:15:12.528183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bdc50 with addr=10.0.0.2, port=4420 00:25:06.477 [2024-12-05 14:15:12.528190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdc50 is same with the state(6) to be set 00:25:06.477 [2024-12-05 14:15:12.528199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bdc50 (9): Bad file descriptor 00:25:06.477 [2024-12-05 14:15:12.528210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:06.477 [2024-12-05 14:15:12.528215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:06.477 [2024-12-05 14:15:12.528221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:06.477 [2024-12-05 14:15:12.528227] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
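The @59/@55/@63 fragments recurring throughout (rpc_cmd | jq | sort | xargs) assemble into three one-line helpers; xargs joins the results onto a single space-separated line, which is why the checks above compare against strings like "nvme0n1 nvme0n2" and "4420 4421":

  get_subsystem_names() {   # discovery.sh@59
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {         # discovery.sh@55
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  get_subsystem_paths() {   # discovery.sh@63: trsvcid of every path to controller $1
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }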
00:25:06.477 [2024-12-05 14:15:12.528231] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:06.477 [2024-12-05 14:15:12.528234] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:06.477 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.477 [2024-12-05 14:15:12.537851] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:06.477 [2024-12-05 14:15:12.537859] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:06.477 [2024-12-05 14:15:12.537862] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:06.477 [2024-12-05 14:15:12.537865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:06.477 [2024-12-05 14:15:12.537875] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:06.477 [2024-12-05 14:15:12.538156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.477 [2024-12-05 14:15:12.538164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bdc50 with addr=10.0.0.2, port=4420 00:25:06.477 [2024-12-05 14:15:12.538170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdc50 is same with the state(6) to be set 00:25:06.477 [2024-12-05 14:15:12.538177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bdc50 (9): Bad file descriptor 00:25:06.477 [2024-12-05 14:15:12.538185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:06.477 [2024-12-05 14:15:12.538189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:06.477 [2024-12-05 14:15:12.538194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:06.477 [2024-12-05 14:15:12.538199] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:06.477 [2024-12-05 14:15:12.538202] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:06.477 [2024-12-05 14:15:12.538205] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:06.477 [2024-12-05 14:15:12.547904] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:06.477 [2024-12-05 14:15:12.547914] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:06.477 [2024-12-05 14:15:12.547917] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:06.477 [2024-12-05 14:15:12.547921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:06.477 [2024-12-05 14:15:12.547931] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:06.477 [2024-12-05 14:15:12.548312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.477 [2024-12-05 14:15:12.548320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bdc50 with addr=10.0.0.2, port=4420 00:25:06.477 [2024-12-05 14:15:12.548326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdc50 is same with the state(6) to be set 00:25:06.477 [2024-12-05 14:15:12.548336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bdc50 (9): Bad file descriptor 00:25:06.477 [2024-12-05 14:15:12.548344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:06.477 [2024-12-05 14:15:12.548348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:06.477 [2024-12-05 14:15:12.548354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:06.477 [2024-12-05 14:15:12.548358] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:06.477 [2024-12-05 14:15:12.548362] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:06.477 [2024-12-05 14:15:12.548365] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:06.477 [2024-12-05 14:15:12.557960] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:06.477 [2024-12-05 14:15:12.557969] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:06.477 [2024-12-05 14:15:12.557972] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:06.477 [2024-12-05 14:15:12.557975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:06.477 [2024-12-05 14:15:12.557985] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:06.477 [2024-12-05 14:15:12.558269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.477 [2024-12-05 14:15:12.558278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bdc50 with addr=10.0.0.2, port=4420 00:25:06.477 [2024-12-05 14:15:12.558283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdc50 is same with the state(6) to be set 00:25:06.477 [2024-12-05 14:15:12.558291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bdc50 (9): Bad file descriptor 00:25:06.477 [2024-12-05 14:15:12.558299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:06.477 [2024-12-05 14:15:12.558303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:06.477 [2024-12-05 14:15:12.558309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:06.477 [2024-12-05 14:15:12.558313] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:06.477 [2024-12-05 14:15:12.558316] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:06.478 [2024-12-05 14:15:12.558319] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:06.478 [2024-12-05 14:15:12.568014] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:06.478 [2024-12-05 14:15:12.568023] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:06.478 [2024-12-05 14:15:12.568026] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:06.478 [2024-12-05 14:15:12.568029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:06.478 [2024-12-05 14:15:12.568040] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:06.478 [2024-12-05 14:15:12.568271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.478 [2024-12-05 14:15:12.568280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bdc50 with addr=10.0.0.2, port=4420 00:25:06.478 [2024-12-05 14:15:12.568285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdc50 is same with the state(6) to be set 00:25:06.478 [2024-12-05 14:15:12.568293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bdc50 (9): Bad file descriptor 00:25:06.478 [2024-12-05 14:15:12.568301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:06.478 [2024-12-05 14:15:12.568305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:06.478 [2024-12-05 14:15:12.568311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:06.478 [2024-12-05 14:15:12.568315] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:06.478 [2024-12-05 14:15:12.568318] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:06.478 [2024-12-05 14:15:12.568321] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:06.478 [2024-12-05 14:15:12.578069] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:06.478 [2024-12-05 14:15:12.578079] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:06.478 [2024-12-05 14:15:12.578083] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:06.478 [2024-12-05 14:15:12.578086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:06.478 [2024-12-05 14:15:12.578096] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:06.478 [2024-12-05 14:15:12.578389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.478 [2024-12-05 14:15:12.578397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bdc50 with addr=10.0.0.2, port=4420 00:25:06.478 [2024-12-05 14:15:12.578403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdc50 is same with the state(6) to be set 00:25:06.478 [2024-12-05 14:15:12.578411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bdc50 (9): Bad file descriptor 00:25:06.478 [2024-12-05 14:15:12.578418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:06.478 [2024-12-05 14:15:12.578429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:06.478 [2024-12-05 14:15:12.578434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:06.478 [2024-12-05 14:15:12.578438] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:06.478 [2024-12-05 14:15:12.578441] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:06.478 [2024-12-05 14:15:12.578445] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:06.478 [2024-12-05 14:15:12.588125] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:06.478 [2024-12-05 14:15:12.588133] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:06.478 [2024-12-05 14:15:12.588136] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:06.478 [2024-12-05 14:15:12.588139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:06.478 [2024-12-05 14:15:12.588148] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:06.478 [2024-12-05 14:15:12.588438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.478 [2024-12-05 14:15:12.588447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bdc50 with addr=10.0.0.2, port=4420 00:25:06.478 [2024-12-05 14:15:12.588452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bdc50 is same with the state(6) to be set 00:25:06.478 [2024-12-05 14:15:12.588464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bdc50 (9): Bad file descriptor 00:25:06.478 [2024-12-05 14:15:12.588471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:06.478 [2024-12-05 14:15:12.588475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:06.478 [2024-12-05 14:15:12.588481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:06.478 [2024-12-05 14:15:12.588485] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:06.478 [2024-12-05 14:15:12.588488] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:06.478 [2024-12-05 14:15:12.588491] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
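The reset cycles above are identical except for their timestamps: bdev_nvme re-dials 10.0.0.2:4420 roughly every 10 ms (14:15:12.528 through 14:15:12.588) and gets ECONNREFUSED (errno 111) each time, because the subsystem's listener has moved from port 4420 to 4421. One way to confirm the retry cadence from a saved copy of this output (log.txt here is a hypothetical file name):

    # Extract the wall-clock timestamps of the failed connect() attempts;
    # consecutive values should be roughly 10 ms apart.
    grep 'posix.c:1054.*errno = 111' log.txt | sed -E 's/.*\[([^]]+)\].*/\1/'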
00:25:06.478 [2024-12-05 14:15:12.591793] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:06.478 [2024-12-05 14:15:12.591805] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:06.478 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:06.479 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:06.479 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:06.479 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.479 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.479 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.479 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:06.479 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:06.479 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:06.479 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:06.479 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:06.479 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:06.479 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:06.479 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:06.479 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:06.479 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.479 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:06.479 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.479 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.739 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:06.739 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:06.739 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.740 14:15:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.681 [2024-12-05 14:15:13.902926] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:07.681 [2024-12-05 14:15:13.902941] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:07.681 [2024-12-05 14:15:13.902951] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:07.942 [2024-12-05 14:15:13.990186] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:08.204 [2024-12-05 14:15:14.262496] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:08.204 [2024-12-05 14:15:14.263167] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x9f9770:1 started. 
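Throughout this run the harness gates each state check with waitforcondition (the autotest_common.sh@918-922 lines in the trace). Pieced together from that xtrace, the helper behaves roughly like the sketch below; the delay between attempts is not visible in the trace, so the sleep is an assumption:

    # Sketch of the waitforcondition polling helper as inferred from the xtrace;
    # the real implementation lives in test/common/autotest_common.sh.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 0.1    # assumed delay; the trace does not show it
        done
        return 1
    }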
00:25:08.204 [2024-12-05 14:15:14.264552] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:08.204 [2024-12-05 14:15:14.264574] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.204 [2024-12-05 14:15:14.271722] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x9f9770 was disconnected and freed. delete nvme_qpair. 
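The duplicate bdev_nvme_start_discovery at host/discovery.sh@143 is wrapped in NOT, which inverts the wrapped command's exit status (the @652/@654 lines above and the @655/@663-@679 lines that follow). A simplified sketch of that pattern, inferred from the trace rather than copied from autotest_common.sh:

    # Succeed only when the wrapped command fails (simplified; the real helper
    # also validates the argument and special-cases exit codes above 128).
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }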
00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.204 request: 00:25:08.204 { 00:25:08.204 "name": "nvme", 00:25:08.204 "trtype": "tcp", 00:25:08.204 "traddr": "10.0.0.2", 00:25:08.204 "adrfam": "ipv4", 00:25:08.204 "trsvcid": "8009", 00:25:08.204 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:08.204 "wait_for_attach": true, 00:25:08.204 "method": "bdev_nvme_start_discovery", 00:25:08.204 "req_id": 1 00:25:08.204 } 00:25:08.204 Got JSON-RPC error response 00:25:08.204 response: 00:25:08.204 { 00:25:08.204 "code": -17, 00:25:08.204 "message": "File exists" 00:25:08.204 } 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 
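The -17 ("File exists") response above is the expected outcome of the @143 step: a discovery service named nvme was already started at @141, so a second bdev_nvme_start_discovery under the same name is rejected. The same exchange can be reproduced against the host socket with SPDK's rpc.py, using exactly the flags traced above:

    # First start succeeds; repeating it with the same -b name returns
    # JSON-RPC error -17 ("File exists").
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w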
00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.204 request: 00:25:08.204 { 00:25:08.204 "name": "nvme_second", 00:25:08.204 "trtype": "tcp", 00:25:08.204 "traddr": "10.0.0.2", 00:25:08.204 "adrfam": "ipv4", 00:25:08.204 "trsvcid": "8009", 00:25:08.204 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:08.204 "wait_for_attach": true, 00:25:08.204 "method": "bdev_nvme_start_discovery", 00:25:08.204 "req_id": 1 00:25:08.204 } 00:25:08.204 Got JSON-RPC error response 00:25:08.204 response: 00:25:08.204 { 00:25:08.204 "code": -17, 00:25:08.204 "message": "File exists" 00:25:08.204 } 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:08.204 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:08.205 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:08.205 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.205 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:08.205 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.205 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:08.205 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.205 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:08.205 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:08.205 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:08.205 14:15:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:08.205 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.205 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:08.205 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.205 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:08.205 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.205 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:08.205 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:08.205 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:08.205 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:08.205 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:08.205 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:08.205 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:08.466 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:08.466 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:08.466 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.466 14:15:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.406 [2024-12-05 14:15:15.507959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.406 [2024-12-05 14:15:15.507983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb0d0 with addr=10.0.0.2, port=8010 00:25:09.406 [2024-12-05 14:15:15.507994] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:09.406 [2024-12-05 14:15:15.507999] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:09.406 [2024-12-05 14:15:15.508004] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:10.345 [2024-12-05 14:15:16.510298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.345 [2024-12-05 14:15:16.510316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bb0d0 with addr=10.0.0.2, port=8010 00:25:10.345 [2024-12-05 14:15:16.510325] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:10.345 [2024-12-05 14:15:16.510330] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:10.345 [2024-12-05 14:15:16.510335] bdev_nvme.c:7579:discovery_poller: *ERROR*: 
Discovery[10.0.0.2:8010] could not start discovery connect 00:25:11.303 [2024-12-05 14:15:17.512298] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:11.303 request: 00:25:11.303 { 00:25:11.303 "name": "nvme_second", 00:25:11.303 "trtype": "tcp", 00:25:11.303 "traddr": "10.0.0.2", 00:25:11.303 "adrfam": "ipv4", 00:25:11.303 "trsvcid": "8010", 00:25:11.303 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:11.303 "wait_for_attach": false, 00:25:11.303 "attach_timeout_ms": 3000, 00:25:11.303 "method": "bdev_nvme_start_discovery", 00:25:11.303 "req_id": 1 00:25:11.303 } 00:25:11.303 Got JSON-RPC error response 00:25:11.303 response: 00:25:11.303 { 00:25:11.303 "code": -110, 00:25:11.303 "message": "Connection timed out" 00:25:11.303 } 00:25:11.303 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:11.303 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:11.303 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:11.303 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:11.303 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:11.303 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:11.303 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:11.303 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:11.303 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.303 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:11.303 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.303 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:11.303 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.303 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:11.303 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:11.303 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2848277 00:25:11.303 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:11.303 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:11.303 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:11.303 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:11.303 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:11.303 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:11.303 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:11.303 rmmod nvme_tcp 00:25:11.303 rmmod nvme_fabrics 00:25:11.562 rmmod nvme_keyring 00:25:11.562 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:11.562 14:15:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:11.562 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:11.562 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2848141 ']' 00:25:11.562 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2848141 00:25:11.562 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2848141 ']' 00:25:11.562 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2848141 00:25:11.562 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:25:11.562 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:11.562 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2848141 00:25:11.563 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:11.563 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:11.563 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2848141' 00:25:11.563 killing process with pid 2848141 00:25:11.563 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2848141 00:25:11.563 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2848141 00:25:11.563 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:11.563 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:11.563 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:11.563 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:11.563 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:11.563 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:11.563 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:11.563 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:11.563 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:11.563 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.563 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.563 14:15:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.106 14:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:14.106 00:25:14.106 real 0m21.462s 00:25:14.106 user 0m25.691s 00:25:14.106 sys 0m7.331s 00:25:14.106 14:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:14.106 14:15:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:14.106 ************************************ 00:25:14.106 END TEST nvmf_host_discovery 00:25:14.106 ************************************ 
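nvmf_host_discovery finishes in about 21.5 s of wall time. Its last negative case (the -110 response earlier in the teardown) pointed nvme_second at trsvcid 8010, where nothing listens, with wait_for_attach disabled and a 3000 ms attach timeout; the poller's three failed connect() attempts, one second apart (14:15:15.507 through 14:15:17.512), exhaust the timeout and the RPC fails instead of hanging. By hand the same probe looks like this, with the rpc.py path as used elsewhere in this job:

    # Nothing listens on 8010, so each connect() fails with errno 111 until the
    # 3000 ms attach timeout expires and the RPC returns -110 ("Connection timed out").
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -T 3000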
00:25:14.106 14:15:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:25:14.106 14:15:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:14.106 14:15:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:14.106 14:15:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:14.106 ************************************
00:25:14.106 START TEST nvmf_host_multipath_status
00:25:14.106 ************************************
00:25:14.106 14:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:25:14.106 * Looking for test storage...
00:25:14.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-:
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-:
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<'
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:25:14.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:14.106 --rc genhtml_branch_coverage=1
00:25:14.106 --rc genhtml_function_coverage=1
00:25:14.106 --rc genhtml_legend=1
00:25:14.106 --rc geninfo_all_blocks=1
00:25:14.106 --rc geninfo_unexecuted_blocks=1
00:25:14.106
00:25:14.106 '
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:25:14.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:14.106 --rc genhtml_branch_coverage=1
00:25:14.106 --rc genhtml_function_coverage=1
00:25:14.106 --rc genhtml_legend=1
00:25:14.106 --rc geninfo_all_blocks=1
00:25:14.106 --rc geninfo_unexecuted_blocks=1
00:25:14.106
00:25:14.106 '
00:25:14.106 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:25:14.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:14.106 --rc genhtml_branch_coverage=1
00:25:14.107 --rc genhtml_function_coverage=1
00:25:14.107 --rc genhtml_legend=1
00:25:14.107 --rc geninfo_all_blocks=1
00:25:14.107 --rc geninfo_unexecuted_blocks=1
00:25:14.107
00:25:14.107 '
00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:25:14.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:14.107 --rc genhtml_branch_coverage=1
00:25:14.107 --rc genhtml_function_coverage=1
00:25:14.107 --rc genhtml_legend=1
00:25:14.107 --rc geninfo_all_blocks=1
00:25:14.107 --rc geninfo_unexecuted_blocks=1
00:25:14.107
00:25:14.107 '
00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
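The lt 1.15 2 gate above drives cmp_versions from scripts/common.sh: both version strings are split on '.', '-' and ':' and compared component by component, so 1.15 sorts before 2 and the newer lcov option set is exported. A condensed sketch of that comparison (the traced original dispatches on an op argument and tracks lt/gt/eq flags, which is elided here):

    # Component-wise "less than" for dotted versions, condensed from the traced logic.
    lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal is not "less than"
    }
    lt 1.15 2 && echo "1.15 < 2"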
00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:14.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh
00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit
00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs
00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no
00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns
00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable
00:25:14.107 14:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:22.243 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:25:22.243 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=()
00:25:22.243 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs
00:25:22.243 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=()
00:25:22.243 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:25:22.243 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=()
00:25:22.243 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers
00:25:22.243 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=()
00:25:22.243 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=()
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=()
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=()
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:25:22.244 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:25:22.244 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]]
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:25:22.244 Found net devices under 0000:4b:00.0: cvl_0_0
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]]
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:25:22.244 Found net devices under 0000:4b:00.1: cvl_0_1
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:25:22.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:22.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms
00:25:22.244
00:25:22.244 --- 10.0.0.2 ping statistics ---
00:25:22.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:22.244 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:22.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:22.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms
00:25:22.244
00:25:22.244 --- 10.0.0.1 ping statistics ---
00:25:22.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:22.244 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:22.244 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:22.245 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:22.245 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:22.245 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:22.245 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:22.245 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:25:22.245 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:22.245 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:22.245 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:22.245 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2855190
00:25:22.245 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2855190
00:25:22.245 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:25:22.245 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2855190 ']'
00:25:22.245 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:22.245 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
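The nvmf_tcp_init sequence above wires the two E810 ports back to back: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator interface (10.0.0.1), and the two pings confirm reachability in both directions. A condensed sketch of that topology, using the device names from this run:

  ip netns add cvl_0_0_ns_spdk                          # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # one port becomes the target NIC
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator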
00:25:22.245 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:22.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:22.245 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:22.245 14:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:22.245 [2024-12-05 14:15:27.817526] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization...
00:25:22.245 [2024-12-05 14:15:27.817595] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:22.245 [2024-12-05 14:15:27.919050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:25:22.245 [2024-12-05 14:15:27.970718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:22.245 [2024-12-05 14:15:27.970769] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:22.245 [2024-12-05 14:15:27.970777] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:22.245 [2024-12-05 14:15:27.970784] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:22.245 [2024-12-05 14:15:27.970791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:22.245 [2024-12-05 14:15:27.972531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:22.245 [2024-12-05 14:15:27.972586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:22.505 14:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:22.505 14:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:25:22.505 14:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:22.505 14:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:22.505 14:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:22.505 14:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:22.505 14:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2855190
00:25:22.505 14:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:25:22.505 [2024-12-05 14:15:28.849020] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:22.766 14:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:25:23.027 Malloc0
00:25:23.027 14:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:25:23.027 14:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:23.287 14:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:23.546 [2024-12-05 14:15:29.668574] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:23.546 14:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:23.807 [2024-12-05 14:15:29.861049] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:23.807 14:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:25:23.807 14:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2855621
00:25:23.807 14:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:25:23.807 14:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2855621 /var/tmp/bdevperf.sock
00:25:23.807 14:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2855621 ']'
00:25:23.807 14:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:23.807 14:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:23.807 14:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:23.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
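Target provisioning in the trace above, condensed (every command appears verbatim in the log; rpc.py talks to the nvmf_tgt running inside the namespace): one TCP transport, one 64 MB malloc bdev with 512-byte blocks, one subsystem, and two listeners on the same IP but different ports. The two listeners are what later serve as the two ANA paths; bdevperf is then started on the initiator side with -z, so it idles until driven over /var/tmp/bdevperf.sock.

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421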
00:25:23.807 14:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:23.807 14:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:24.751 14:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:24.751 14:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:25:24.751 14:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:25:24.751 14:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:25:25.322 Nvme0n1
00:25:25.322 14:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:25:25.582 Nvme0n1
00:25:25.583 14:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:25:25.583 14:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:25:27.571 14:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:25:27.571 14:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:25:27.831 14:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:25:27.831 14:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:25:28.775 14:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:25:28.775 14:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:29.035 14:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:29.035 14:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:29.035 14:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:29.035 14:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
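Host side, condensed from the trace: attaching the same subsystem twice under one controller name with -x multipath yields a single Nvme0n1 bdev with two io_paths (one per target port), and each port_status check filters a single flag for one path out of bdev_nvme_get_io_paths. A sketch of that pattern, with the commands as they appear in the log:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  # one flag (current / connected / accessible) for the path on a given port:
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'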
00:25:29.035 14:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:29.035 14:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:29.294 14:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:29.294 14:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:29.294 14:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:29.294 14:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:29.555 14:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:29.555 14:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:29.555 14:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:29.555 14:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:29.555 14:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:29.555 14:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:29.555 14:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:29.555 14:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:29.815 14:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:29.815 14:15:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:29.815 14:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:29.815 14:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:30.076 14:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:30.076 14:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:25:30.076 14:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:30.076 14:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:25:30.337 14:15:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:25:31.281 14:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:25:31.281 14:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:25:31.281 14:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:31.281 14:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:31.541 14:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:31.541 14:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:25:31.541 14:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:31.541 14:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:31.802 14:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:31.802 14:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:31.802 14:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:31.802 14:15:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:31.802 14:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:31.802 14:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:31.802 14:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:31.802 14:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:32.063 14:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:32.063 14:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:32.063 14:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:32.063 14:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:32.323 14:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:32.323 14:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:32.323 14:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:32.323 14:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:32.584 14:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:32.584 14:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:25:32.584 14:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:32.844 14:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:25:32.844 14:15:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:25:33.788 14:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:25:33.788 14:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:33.788 14:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:33.788 14:15:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:34.047 14:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:34.047 14:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:25:34.047 14:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:34.047 14:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:34.308 14:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:34.308 14:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:34.308 14:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:34.308 14:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:34.308 14:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:34.308 14:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:34.308 14:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:34.308 14:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:34.568 14:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:34.568 14:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:34.568 14:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:34.568 14:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:34.828 14:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:34.828 14:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:34.828 14:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:34.828 14:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:34.828 14:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:34.828 14:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:25:35.089 14:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:35.350 14:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:25:35.350 14:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:25:36.290 14:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:25:36.290 14:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:36.290 14:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:36.290 14:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:36.550 14:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:36.550 14:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:25:36.550 14:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:36.550 14:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:36.550 14:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:36.550 14:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:36.811 14:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:36.811 14:15:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:36.811 14:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:36.811 14:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:36.811 14:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:36.811 14:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:37.072 14:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:37.072 14:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:37.072 14:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:37.072 14:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:37.332 14:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:37.332 14:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:25:37.332 14:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:37.332 14:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:37.332 14:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:37.332 14:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:25:37.332 14:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:25:37.593 14:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:25:37.854 14:15:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:25:38.795 14:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:25:38.795 14:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:25:38.795 14:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:38.795 14:15:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:39.054 14:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:39.054 14:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:25:39.054 14:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:39.054 14:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:39.054 14:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:39.054 14:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:39.054 14:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:39.054 14:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:39.314 14:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:39.314 14:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:39.314 14:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:39.314 14:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:39.573 14:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:39.573 14:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:25:39.573 14:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:39.573 14:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:39.573 14:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:39.573 14:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:25:39.573 14:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:39.573 14:15:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:39.832 14:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:39.832 14:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:25:39.832 14:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:25:40.092 14:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:25:40.352 14:15:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:25:41.291 14:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:25:41.291 14:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:25:41.291 14:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:41.291 14:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:41.552 14:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:41.552 14:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:25:41.552 14:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:41.552 14:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:41.552 14:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:41.552 14:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:41.552 14:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:41.552 14:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:41.811 14:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:41.811 14:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:41.811 14:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:41.811 14:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:42.072 14:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:42.072 14:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:25:42.072 14:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:42.072 14:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:42.072 14:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:42.072 14:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:42.072 14:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:42.072 14:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:42.332 14:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:42.332 14:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:25:42.591 14:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
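The bdev_nvme_set_multipath_policy call above switches Nvme0n1 to active_active: until this point at most one path per state combination reported current=true, while the check_status true true ... that follows expects both optimized paths to be current at once. A sketch of the single command involved (that the prior policy was the active_passive default is an assumption, not stated in the log):

  # Presumed default is active_passive (one current path at a time);
  # active_active lets every optimized, accessible path submit I/O concurrently.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active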
00:25:42.591 14:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:25:42.591 14:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:25:42.591 14:15:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:25:42.850 14:15:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:25:43.789 14:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:25:43.789 14:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:43.789 14:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:43.789 14:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:44.048 14:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:44.048 14:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:25:44.048 14:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:44.048 14:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:44.307 14:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:44.307 14:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:44.307 14:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:44.307 14:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:44.307 14:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:44.307 14:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:44.308 14:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:44.308 14:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:44.569 14:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:44.569 14:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:44.569 14:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:44.569 14:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:44.830 14:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:44.830 14:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:44.830 14:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:44.830 14:15:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:45.089 14:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:45.089 14:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:25:45.089 14:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:45.089 14:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:25:45.348 14:15:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:25:46.289 14:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:25:46.289 14:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:25:46.289 14:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:46.289 14:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:46.549 14:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:46.549 14:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:25:46.549 14:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:46.549 14:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:46.810 14:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:46.810 14:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:46.810 14:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:46.810 14:15:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:46.810 14:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:46.810 14:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:46.810 14:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:46.810 14:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:47.072 14:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:47.072 14:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:47.072 14:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:47.072 14:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:47.357 14:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:47.357 14:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:47.357 14:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:47.357 14:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:47.357 14:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:47.357 14:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:25:47.357 14:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:47.619 14:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:25:47.880 14:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
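Each phase boundary above (sh@119, sh@123, sh@129) is a set_ANA_state call: two nvmf_subsystem_listener_set_ana_state RPCs against the target, one per listener, and each is followed by sleep 1 so the initiator has time to observe the ANA change before the paths are re-checked. A sketch reconstructed from the sh@59/sh@60 lines in the trace (the positional-argument convention is an assumption):

set_ANA_state() {
    # $1 = ANA state for the listener on port 4420, $2 = state for the one on 4421
    local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}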
00:25:48.822 14:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:25:48.822 14:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:48.822 14:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:48.822 14:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:49.082 14:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:49.082 14:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:25:49.082 14:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:49.082 14:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:49.082 14:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:49.082 14:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:49.082 14:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:49.082 14:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:49.342 14:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:49.342 14:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:49.342 14:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:49.342 14:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:49.602 14:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:49.602 14:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:49.602 14:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:49.602 14:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:49.602 14:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:49.602 14:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:25:49.602 14:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:49.602 14:15:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:49.862 14:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:49.862 14:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:25:49.862 14:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:25:50.121 14:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:25:50.121 14:15:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
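check_status, called with six booleans after every transition, is just the fixed probe order seen at sh@68 through sh@73: current, connected, and accessible, each for port 4420 and then 4421. A sketch assuming the port_status helper sketched earlier:

check_status() {
    port_status 4420 current "$1"
    port_status 4421 current "$2"
    port_status 4420 connected "$3"
    port_status 4421 connected "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}

So the check_status true false true true true false run that follows asserts that after the non_optimized/inaccessible transition the 4421 path is no longer current and no longer accessible, while both paths stay connected and 4420 keeps carrying the I/O.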
00:25:51.503 14:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:25:51.503 14:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:25:51.503 14:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:51.503 14:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:25:51.503 14:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:51.503 14:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:25:51.503 14:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:51.503 14:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:25:51.503 14:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:51.503 14:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:25:51.503 14:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:51.503 14:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:25:51.763 14:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:51.763 14:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:25:51.763 14:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:51.763 14:15:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:25:52.024 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:52.024 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:25:52.024 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:52.024 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:25:52.024 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:25:52.024 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:25:52.024 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:25:52.024 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:25:52.285 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:25:52.285 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2855621
00:25:52.285 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2855621 ']'
00:25:52.285 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2855621
00:25:52.285 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:25:52.285 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:52.285 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2855621
00:25:52.285 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:25:52.285 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:25:52.285 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2855621'
killing process with pid 2855621
00:25:52.285 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2855621
00:25:52.285 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2855621
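killprocess, traced through common/autotest_common.sh@954..@978 above, is the standard teardown: validate the pid argument, probe it with kill -0, make sure the command name is not sudo (here ps reports reactor_2, one of bdevperf's reactor threads), then kill it and wait for it so the per-job JSON summary below is collected as the process exits. A sketch of the sequence as it appears in the trace; the real helper in test/common/autotest_common.sh has additional branches:

killprocess() {
    [ -z "$1" ] && return 1              # @954: a pid argument is required
    kill -0 "$1" || return               # @958: bail out if it already exited
    if [ "$(uname)" = Linux ]; then      # @959
        local process_name
        process_name=$(ps --no-headers -o comm= "$1")   # @960
        [ "$process_name" = sudo ] && return 1          # @964: never kill sudo itself
    fi
    echo "killing process with pid $1"   # @972
    kill "$1"                            # @973: default SIGTERM
    wait "$1"                            # @978: reap it and surface its exit status
}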
00:25:52.285 "core_mask": "0x4", 00:25:52.285 "workload": "verify", 00:25:52.285 "status": "terminated", 00:25:52.285 "verify_range": { 00:25:52.285 "start": 0, 00:25:52.285 "length": 16384 00:25:52.285 }, 00:25:52.285 "queue_depth": 128, 00:25:52.285 "io_size": 4096, 00:25:52.285 "runtime": 26.742038, 00:25:52.285 "iops": 11859.155985044968, 00:25:52.285 "mibps": 46.32482806658191, 00:25:52.285 "io_failed": 0, 00:25:52.285 "io_timeout": 0, 00:25:52.285 "avg_latency_us": 10775.323775349112, 00:25:52.285 "min_latency_us": 655.36, 00:25:52.285 "max_latency_us": 3019898.88 00:25:52.285 } 00:25:52.285 ], 00:25:52.285 "core_count": 1 00:25:52.285 } 00:25:52.562 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2855621 00:25:52.562 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:52.562 [2024-12-05 14:15:29.940181] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:25:52.562 [2024-12-05 14:15:29.940257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2855621 ] 00:25:52.562 [2024-12-05 14:15:30.033681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.562 [2024-12-05 14:15:30.089253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:52.562 Running I/O for 90 seconds... 00:25:52.562 10409.00 IOPS, 40.66 MiB/s [2024-12-05T13:15:58.862Z] 11016.50 IOPS, 43.03 MiB/s [2024-12-05T13:15:58.862Z] 11033.67 IOPS, 43.10 MiB/s [2024-12-05T13:15:58.862Z] 11461.75 IOPS, 44.77 MiB/s [2024-12-05T13:15:58.862Z] 11759.20 IOPS, 45.93 MiB/s [2024-12-05T13:15:58.862Z] 11909.17 IOPS, 46.52 MiB/s [2024-12-05T13:15:58.862Z] 12059.00 IOPS, 47.11 MiB/s [2024-12-05T13:15:58.862Z] 12164.00 IOPS, 47.52 MiB/s [2024-12-05T13:15:58.862Z] 12244.89 IOPS, 47.83 MiB/s [2024-12-05T13:15:58.862Z] 12317.70 IOPS, 48.12 MiB/s [2024-12-05T13:15:58.862Z] 12376.55 IOPS, 48.35 MiB/s [2024-12-05T13:15:58.862Z] [2024-12-05 14:15:43.732832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.732865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.732899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.732906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.732917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.732923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.732934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.732939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.732949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.732954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.732965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.732970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.732980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.732985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.732995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.733001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.733011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.733016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.733027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.733037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.733047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.733052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.733062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.733068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.733078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.733083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.733093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 
[2024-12-05 14:15:43.733098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.733108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.733113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.733124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.733129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.733509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.733521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.733533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.733539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.733550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.733555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.733567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.733573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.733584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.733590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.733601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.733606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.733621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.733627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.733638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5432 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.733644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.733655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.733660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.733671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.733677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.733688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.733693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.733704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.733710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:52.562 [2024-12-05 14:15:43.733721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.562 [2024-12-05 14:15:43.733726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.733737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.733742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.733753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.733759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.733770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.733775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.733786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.733791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.733804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:53 nsid:1 lba:5512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.733809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.733822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.733827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.733838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.733844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.733855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.733860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.733871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.733876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.733888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.733893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.733904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.733910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.733921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.733927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.733938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.733943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.733954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.733959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.733970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.733975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.733986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.733991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.734002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.734007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.734018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.734025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.734036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.734041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.734053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.734059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.734070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.734075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.734086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.734091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.734102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.734108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.734119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.734124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:52.563 
[2024-12-05 14:15:43.734135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.734141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.734152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.734157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.734168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.734173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.734184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.734189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.734200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.734205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.734216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.734223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.734234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.734239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.734250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.734255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.734266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.734271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.734282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.734287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 
cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.734298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.734303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.734314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.734320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.734331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.563 [2024-12-05 14:15:43.734336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.734348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.563 [2024-12-05 14:15:43.734353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:52.563 [2024-12-05 14:15:43.734364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.563 [2024-12-05 14:15:43.734369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.564 [2024-12-05 14:15:43.734386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.564 [2024-12-05 14:15:43.734402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.564 [2024-12-05 14:15:43.734418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.564 [2024-12-05 14:15:43.734438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.564 [2024-12-05 14:15:43.734547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.734567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.734586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.734605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.734625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.734644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.734663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.734682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.734701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.734720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.734739] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.734760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.734779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.734798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.734817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.734836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.734855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.734874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.734893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.734912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 
[2024-12-05 14:15:43.734930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.734950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.734969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.734982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.734988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.735003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.735008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.735023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.735028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.735041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.735047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.735060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.735066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.735080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.735085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.735099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.564 [2024-12-05 14:15:43.735104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:52.564 [2024-12-05 14:15:43.735118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6008 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000
00:25:52.564 [2024-12-05 14:15:43.735123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:52.564 [2024-12-05 14:15:43.735137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:52.564 [2024-12-05 14:15:43.735142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:52.565 [2024-12-05 14:15:43.735387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.565 [2024-12-05 14:15:43.735393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005b p:0 m:0 dnr:0
[... remaining pairs of this burst omitted: WRITEs lba:6024-6080 and READs lba:5128-5240 on qid:1, identical apart from cid/lba, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) with sqhd advancing through 006b, dnr:0 throughout ...]
00:25:52.565 12340.75 IOPS, 48.21 MiB/s [2024-12-05T13:15:58.865Z]
11391.46 IOPS, 44.50 MiB/s [2024-12-05T13:15:58.865Z]
10577.79 IOPS, 41.32 MiB/s [2024-12-05T13:15:58.865Z]
9927.73 IOPS, 38.78 MiB/s [2024-12-05T13:15:58.865Z]
10118.81 IOPS, 39.53 MiB/s [2024-12-05T13:15:58.865Z]
10290.59 IOPS, 40.20 MiB/s [2024-12-05T13:15:58.865Z]
10612.83 IOPS, 41.46 MiB/s [2024-12-05T13:15:58.865Z]
10929.00 IOPS, 42.69 MiB/s [2024-12-05T13:15:58.865Z]
11126.75 IOPS, 43.46 MiB/s [2024-12-05T13:15:58.865Z]
11195.29 IOPS, 43.73 MiB/s [2024-12-05T13:15:58.865Z]
11269.00 IOPS, 44.02 MiB/s [2024-12-05T13:15:58.865Z]
11479.09 IOPS, 44.84 MiB/s [2024-12-05T13:15:58.865Z]
11685.50 IOPS, 45.65 MiB/s [2024-12-05T13:15:58.865Z]
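A consistency check on the throughput ticks above: every I/O in this run is len:8 blocks of 512 B, i.e. 4 KiB (matching len:0x1000 in the SGL prints), so the MiB/s column should equal IOPS x 4096 / 2^20; e.g. 12340.75 x 4096 B is 48.21 MiB/s. A minimal C sketch of that arithmetic, with IOPS values copied from the first, second, and last ticks:

#include <stddef.h>
#include <stdio.h>

/* Sanity-check the bdevperf-style ticks: each I/O is 8 * 512 B = 4 KiB,
 * so MiB/s = IOPS * 4096 / 2^20.  IOPS values copied from the log. */
int main(void)
{
    const double iops[] = { 12340.75, 11391.46, 11685.50 };
    const double io_bytes = 8 * 512;  /* block count * sector size */

    for (size_t i = 0; i < sizeof(iops) / sizeof(iops[0]); i++)
        printf("%.2f IOPS -> %.2f MiB/s\n",
               iops[i], iops[i] * io_bytes / (1024.0 * 1024.0));
    /* prints 48.21, 44.50, 45.65, matching the MiB/s column above */
    return 0;
}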
00:25:52.565 [2024-12-05 14:15:56.342712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:52.565 [2024-12-05 14:15:56.342746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:52.565 [2024-12-05 14:15:56.342955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.565 [2024-12-05 14:15:56.342961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
[... remaining pairs omitted: WRITEs lba:82736-82912 (sqhd:0046-0051) and READs lba:81920-82688 (sqhd:0053-007f, wrapping to sqhd:0000-0001 at lba:82568-82600), all ASYMMETRIC ACCESS INACCESSIBLE (03/02), dnr:0 throughout ...]
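Note the sqhd values in the completions above running up to 007f and wrapping to 0000, 0001: sqhd is the controller-reported submission queue head, which advances one slot per consumed command and wraps at the queue size, so a wrap after 0x7f is consistent with a 128-entry I/O submission queue. A minimal sketch of that behavior, assuming a 128-entry queue (SQ_ENTRIES and the function name are illustrative, not SPDK code):

#include <stdint.h>
#include <stdio.h>

#define SQ_ENTRIES 128u  /* assumed from the 007f -> 0000 wrap above */

/* The head reported in each completion advances modulo the queue size. */
static uint16_t advance_sq_head(uint16_t head)
{
    return (uint16_t)((head + 1u) % SQ_ENTRIES);
}

int main(void)
{
    uint16_t head = 0x7e;

    for (int i = 0; i < 4; i++) {
        printf("sqhd:%04x\n", (unsigned)head);
        head = advance_sq_head(head);
    }
    /* prints sqhd:007e 007f 0000 0001, matching the wrap in the log */
    return 0;
}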
00:25:52.567 [2024-12-05 14:15:56.343988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.567 [2024-12-05 14:15:56.343993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... remaining pairs omitted: the same ranges are reissued and fail again: READs lba:82664-82696 (sqhd:0003-0004), WRITEs lba:82736-82896 (sqhd:0005-000a), READs lba:81912-82264 (sqhd:000b-0016), new READs lba:82728-82920 (sqhd:0017-001d), and a further mixed pass over lba:81976-82896 (sqhd:001e-0032), all ASYMMETRIC ACCESS INACCESSIBLE (03/02), dnr:0 throughout ...]
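The "(03/02)" printed by spdk_nvme_print_completion is the NVMe status code type / status code pair: SCT 0x3 is Path Related Status, and under it SC 0x2 is Asymmetric Access Inaccessible; dnr:0 means the do-not-retry bit is clear, which is why the same LBAs keep being reissued. A small self-contained sketch of the decode; the struct is a local mirror of the printed fields, not SPDK's spdk_nvme_cpl:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Local mirror of the fields shown in each completion line above;
 * illustrative only, not the SPDK definition. */
struct cpl_status {
    uint8_t sct;  /* status code type: 0x3 = Path Related Status */
    uint8_t sc;   /* status code: 0x2 (under SCT 0x3) = ANA Inaccessible */
    bool dnr;     /* do-not-retry bit */
};

static bool is_ana_inaccessible(const struct cpl_status *st)
{
    return st->sct == 0x3 && st->sc == 0x2;
}

int main(void)
{
    /* "(03/02) ... dnr:0" as printed throughout this log */
    struct cpl_status st = { .sct = 0x3, .sc = 0x2, .dnr = false };

    printf("ANA inaccessible: %s, retryable: %s\n",
           is_ana_inaccessible(&st) ? "yes" : "no",
           st.dnr ? "no" : "yes");
    return 0;
}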
00:25:52.568 [2024-12-05 14:15:56.347286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.568 [2024-12-05 14:15:56.347291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:25:52.568 [2024-12-05 14:15:56.347349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:52.568 [2024-12-05 14:15:56.347354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
[... remaining pairs omitted: retried READs of lba:82752-82880 (sqhd:0034-0036) and new WRITEs lba:82944-83312 (sqhd:0038-004f), all ASYMMETRIC ACCESS INACCESSIBLE (03/02), dnr:0 throughout ...]
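The WRITE and READ prints carry different SGL text: writes show "SGL DATA BLOCK OFFSET 0x0 len:0x1000" (a data block descriptor whose address field is an offset, i.e. in-capsule data on NVMe/TCP), while reads show "SGL TRANSPORT DATA BLOCK TRANSPORT 0x0" (a transport-specific data block descriptor). A hedged sketch of that distinction; the type/subtype values follow the NVMe base spec, but the struct and function are illustrative, not SPDK's SGL types:

#include <stdint.h>
#include <stdio.h>

/* Illustrative SGL descriptor, not SPDK's definition.  Per the NVMe base
 * spec: type 0x0 = Data Block (subtype 0x1 = address is an offset),
 * type 0x5 = Transport SGL Data Block. */
struct sgl_desc {
    uint8_t  type;
    uint8_t  subtype;
    uint64_t address;  /* capsule offset or transport-specific value */
    uint32_t length;   /* payload length in bytes */
};

static void print_sgl(const struct sgl_desc *s)
{
    if (s->type == 0x0 && s->subtype == 0x1)        /* in-capsule write data */
        printf("SGL DATA BLOCK OFFSET 0x%llx len:0x%x\n",
               (unsigned long long)s->address, s->length);
    else if (s->type == 0x5)                        /* read via transport SGL */
        printf("SGL TRANSPORT DATA BLOCK TRANSPORT 0x%llx\n",
               (unsigned long long)s->address);
}

int main(void)
{
    struct sgl_desc w = { 0x0, 0x1, 0x0, 0x1000 };  /* 4 KiB in-capsule WRITE */
    struct sgl_desc r = { 0x5, 0x0, 0x0, 0x1000 };  /* READ */
    print_sgl(&w);
    print_sgl(&r);
    return 0;
}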
00:25:52.569 [2024-12-05 14:15:56.349959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:52.569 [2024-12-05 14:15:56.349964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
[... remaining pairs omitted: another mixed retry pass over lba:81976-83104 (sqhd:0051-0068), then new WRITEs lba:83352-83416 (sqhd:0069-006d), all ASYMMETRIC ACCESS INACCESSIBLE (03/02), dnr:0 throughout ...]
00:25:52.570 [2024-12-05 14:15:56.351161] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.570 [2024-12-05 14:15:56.351167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.351177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.570 [2024-12-05 14:15:56.351182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.351192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.570 [2024-12-05 14:15:56.351197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.351208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.570 [2024-12-05 14:15:56.351213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.351223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.570 [2024-12-05 14:15:56.351228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.351239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.570 [2024-12-05 14:15:56.351247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.351257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.570 [2024-12-05 14:15:56.351262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.351272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.570 [2024-12-05 14:15:56.351278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.351288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.570 [2024-12-05 14:15:56.351293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.351303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.570 [2024-12-05 14:15:56.351308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
00:25:52.570 [2024-12-05 14:15:56.351319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.570 [2024-12-05 14:15:56.351324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.351334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.570 [2024-12-05 14:15:56.351340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.351350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:83032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.570 [2024-12-05 14:15:56.351355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.351365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.570 [2024-12-05 14:15:56.351370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.351381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.570 [2024-12-05 14:15:56.351386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.351396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.570 [2024-12-05 14:15:56.351402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.351412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.570 [2024-12-05 14:15:56.351417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.351427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.570 [2024-12-05 14:15:56.351434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.351444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.570 [2024-12-05 14:15:56.351449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.351464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.570 [2024-12-05 14:15:56.351469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.351480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.570 [2024-12-05 14:15:56.351485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.352161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:83128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.570 [2024-12-05 14:15:56.352171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.352182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.570 [2024-12-05 14:15:56.352187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.352198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.570 [2024-12-05 14:15:56.352204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.352217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.570 [2024-12-05 14:15:56.352223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.352233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.570 [2024-12-05 14:15:56.352238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.352248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.570 [2024-12-05 14:15:56.352254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.352264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.570 [2024-12-05 14:15:56.352269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.352279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.570 [2024-12-05 14:15:56.352284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.352295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.570 [2024-12-05 14:15:56.352302] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.352313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.570 [2024-12-05 14:15:56.352318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.352328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.570 [2024-12-05 14:15:56.352333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.352343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.570 [2024-12-05 14:15:56.352348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:52.570 [2024-12-05 14:15:56.352359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.570 [2024-12-05 14:15:56.352364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.352375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.571 [2024-12-05 14:15:56.352380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.352390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.571 [2024-12-05 14:15:56.352395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.352405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.571 [2024-12-05 14:15:56.352410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.352421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.571 [2024-12-05 14:15:56.352426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.352436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.571 [2024-12-05 14:15:56.352441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.352451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:52.571 [2024-12-05 14:15:56.352461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.352472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.571 [2024-12-05 14:15:56.352477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.352487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.571 [2024-12-05 14:15:56.352492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.352505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.571 [2024-12-05 14:15:56.352510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.352520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.571 [2024-12-05 14:15:56.352525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.352536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.571 [2024-12-05 14:15:56.352541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.352551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.571 [2024-12-05 14:15:56.352556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.353093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.571 [2024-12-05 14:15:56.353103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.353114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.571 [2024-12-05 14:15:56.353120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.353131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.571 [2024-12-05 14:15:56.353136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.353146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:83056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.571 [2024-12-05 14:15:56.353152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.353162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.571 [2024-12-05 14:15:56.353168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.353178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.571 [2024-12-05 14:15:56.353183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.353193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.571 [2024-12-05 14:15:56.353198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.353208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.571 [2024-12-05 14:15:56.353214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.353226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.571 [2024-12-05 14:15:56.353231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.353241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.571 [2024-12-05 14:15:56.353246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.353258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.571 [2024-12-05 14:15:56.353263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.353273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.571 [2024-12-05 14:15:56.353279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.353289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.571 [2024-12-05 14:15:56.353294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.353304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.571 [2024-12-05 14:15:56.353309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.353319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.571 [2024-12-05 14:15:56.353325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.353335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.571 [2024-12-05 14:15:56.353340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.363318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.571 [2024-12-05 14:15:56.363342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.363358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.571 [2024-12-05 14:15:56.363366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.363380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:83032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.571 [2024-12-05 14:15:56.363388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.363402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.571 [2024-12-05 14:15:56.363409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.363422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.571 [2024-12-05 14:15:56.363434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.363448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.571 [2024-12-05 14:15:56.363462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.363477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.571 [2024-12-05 14:15:56.363484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
00:25:52.571 [2024-12-05 14:15:56.363952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.571 [2024-12-05 14:15:56.363967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:52.571 [2024-12-05 14:15:56.363983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.572 [2024-12-05 14:15:56.363990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.364004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.572 [2024-12-05 14:15:56.364011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.364025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.572 [2024-12-05 14:15:56.364032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.364046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:83408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.572 [2024-12-05 14:15:56.364053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.364067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.572 [2024-12-05 14:15:56.364074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.364088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.572 [2024-12-05 14:15:56.364095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.364108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.572 [2024-12-05 14:15:56.364116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.364130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.572 [2024-12-05 14:15:56.364137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.364151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.572 [2024-12-05 14:15:56.364160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.364174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.572 [2024-12-05 14:15:56.364181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.364195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.572 [2024-12-05 14:15:56.364202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.364215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.572 [2024-12-05 14:15:56.364222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.364236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.572 [2024-12-05 14:15:56.364243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.364257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.572 [2024-12-05 14:15:56.364264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.364277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.572 [2024-12-05 14:15:56.364284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.364298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.572 [2024-12-05 14:15:56.364305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.364318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.572 [2024-12-05 14:15:56.364325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.364339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.572 [2024-12-05 14:15:56.364346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.364360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.572 [2024-12-05 14:15:56.364368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.364382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.572 [2024-12-05 14:15:56.364389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.364403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.572 [2024-12-05 14:15:56.364410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.364425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.572 [2024-12-05 14:15:56.364432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.364446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.572 [2024-12-05 14:15:56.364452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.364473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.572 [2024-12-05 14:15:56.364480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.365850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.572 [2024-12-05 14:15:56.365864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.365879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.572 [2024-12-05 14:15:56.365886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.365900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.572 [2024-12-05 14:15:56.365907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.365921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.572 [2024-12-05 14:15:56.365928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.365942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:52.572 [2024-12-05 14:15:56.365949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.365963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.572 [2024-12-05 14:15:56.365969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.365983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.572 [2024-12-05 14:15:56.365990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.366004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.572 [2024-12-05 14:15:56.366011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.366024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.572 [2024-12-05 14:15:56.366031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.366048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.572 [2024-12-05 14:15:56.366055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.366069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.572 [2024-12-05 14:15:56.366076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.366089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.572 [2024-12-05 14:15:56.366096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.366110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.572 [2024-12-05 14:15:56.366117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.366130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.572 [2024-12-05 14:15:56.366137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.366151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 
lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.572 [2024-12-05 14:15:56.366158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:52.572 [2024-12-05 14:15:56.366172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.573 [2024-12-05 14:15:56.366179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:52.573 [2024-12-05 14:15:56.366192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.573 [2024-12-05 14:15:56.366199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:52.573 [2024-12-05 14:15:56.366213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.573 [2024-12-05 14:15:56.366220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:52.573 [2024-12-05 14:15:56.366233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.573 [2024-12-05 14:15:56.366240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:52.573 [2024-12-05 14:15:56.366254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.573 [2024-12-05 14:15:56.366261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:52.573 [2024-12-05 14:15:56.366275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.573 [2024-12-05 14:15:56.366282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:52.573 [2024-12-05 14:15:56.366295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.573 [2024-12-05 14:15:56.366306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.573 [2024-12-05 14:15:56.366320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.573 [2024-12-05 14:15:56.366327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.573 [2024-12-05 14:15:56.366341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:83504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.573 [2024-12-05 14:15:56.366348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:52.573 [2024-12-05 14:15:56.366362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.573 [2024-12-05 14:15:56.366369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:52.573 [2024-12-05 14:15:56.366382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:83632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.573 [2024-12-05 14:15:56.366389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:52.573 [2024-12-05 14:15:56.366403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.573 [2024-12-05 14:15:56.366410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:52.573 [2024-12-05 14:15:56.366424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.573 [2024-12-05 14:15:56.366431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:52.573 [2024-12-05 14:15:56.366444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.573 [2024-12-05 14:15:56.366451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:52.573 [2024-12-05 14:15:56.366470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.573 [2024-12-05 14:15:56.366476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:52.573 [2024-12-05 14:15:56.366490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.573 [2024-12-05 14:15:56.366497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:52.573 [2024-12-05 14:15:56.366511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.573 [2024-12-05 14:15:56.366518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:52.573 [2024-12-05 14:15:56.368602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.573 [2024-12-05 14:15:56.368619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:52.573 [2024-12-05 14:15:56.368634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.573 [2024-12-05 14:15:56.368645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006d p:0 m:0 
dnr:0
00:25:52.573 [2024-12-05 14:15:56.368659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.573 [2024-12-05 14:15:56.368665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:25:52.573 [2024-12-05 14:15:56.368679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.573 [2024-12-05 14:15:56.368686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:25:52.573 [2024-12-05 14:15:56.368762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:52.573 [2024-12-05 14:15:56.368768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
[... roughly 190 further command/completion pairs from nvme_qpair.c (243:nvme_io_qpair_print_command followed by 474:spdk_nvme_print_completion), timestamps 14:15:56.368 through 14:15:56.380, elided: READ (SGL TRANSPORT DATA BLOCK) and WRITE (SGL DATA BLOCK OFFSET, len:0x1000) commands on sqid:1, every one completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) status ...]
00:25:52.578 [2024-12-05 14:15:56.380495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.578 [2024-12-05 14:15:56.380501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:25:52.578 [2024-12-05 14:15:56.380514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:52.578 [2024-12-05 14:15:56.380520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:52.578 [2024-12-05 14:15:56.380533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.578 [2024-12-05 14:15:56.380541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:52.578 [2024-12-05 14:15:56.380554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.578 [2024-12-05 14:15:56.380560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:52.578 [2024-12-05 14:15:56.380572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.578 [2024-12-05 14:15:56.380579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:52.578 [2024-12-05 14:15:56.380591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.578 [2024-12-05 14:15:56.380597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.380609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.579 [2024-12-05 14:15:56.380615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.380628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.579 [2024-12-05 14:15:56.380634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.380646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.579 [2024-12-05 14:15:56.380653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.380665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.579 [2024-12-05 14:15:56.380671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.380684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.579 [2024-12-05 14:15:56.380690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.380703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 
lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.579 [2024-12-05 14:15:56.380709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.380722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.579 [2024-12-05 14:15:56.380728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.380740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.579 [2024-12-05 14:15:56.380747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.380759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.579 [2024-12-05 14:15:56.380766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.380780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.579 [2024-12-05 14:15:56.380786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.380798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.579 [2024-12-05 14:15:56.380804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.380816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.579 [2024-12-05 14:15:56.380823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.380835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.579 [2024-12-05 14:15:56.380841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.380854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.579 [2024-12-05 14:15:56.380860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.380872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.579 [2024-12-05 14:15:56.380879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.380891] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.579 [2024-12-05 14:15:56.380897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.380909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.579 [2024-12-05 14:15:56.380915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.380928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.579 [2024-12-05 14:15:56.380934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.380947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.579 [2024-12-05 14:15:56.380953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.380965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.579 [2024-12-05 14:15:56.380972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.380984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.579 [2024-12-05 14:15:56.380990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.381004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.579 [2024-12-05 14:15:56.381010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.381023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.579 [2024-12-05 14:15:56.381029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.382238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.579 [2024-12-05 14:15:56.382253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.382267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.579 [2024-12-05 14:15:56.382273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 
00:25:52.579 [2024-12-05 14:15:56.382286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.579 [2024-12-05 14:15:56.382292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.382304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.579 [2024-12-05 14:15:56.382310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.382322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.579 [2024-12-05 14:15:56.382329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.382341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.579 [2024-12-05 14:15:56.382347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.382360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.579 [2024-12-05 14:15:56.382366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.382378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.579 [2024-12-05 14:15:56.382384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.382397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:84936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.579 [2024-12-05 14:15:56.382403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.382415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.579 [2024-12-05 14:15:56.382421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.382434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.579 [2024-12-05 14:15:56.382443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.579 [2024-12-05 14:15:56.382460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.579 [2024-12-05 14:15:56.382467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.382479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.580 [2024-12-05 14:15:56.382485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.382498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.580 [2024-12-05 14:15:56.382504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.382516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.580 [2024-12-05 14:15:56.382522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.382534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.580 [2024-12-05 14:15:56.382541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.382553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.580 [2024-12-05 14:15:56.382559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.382572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.580 [2024-12-05 14:15:56.382578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.382590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.580 [2024-12-05 14:15:56.382597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.382609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.580 [2024-12-05 14:15:56.382615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.382628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.580 [2024-12-05 14:15:56.382634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.382646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.580 [2024-12-05 14:15:56.382652] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.382665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.580 [2024-12-05 14:15:56.382673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.383411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.580 [2024-12-05 14:15:56.383422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.383436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.580 [2024-12-05 14:15:56.383442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.383460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.580 [2024-12-05 14:15:56.383467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.383479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.580 [2024-12-05 14:15:56.383486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.383498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.580 [2024-12-05 14:15:56.383504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.383517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.580 [2024-12-05 14:15:56.383523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.383535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.580 [2024-12-05 14:15:56.383541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.383554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.580 [2024-12-05 14:15:56.383560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.383573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:52.580 [2024-12-05 14:15:56.383579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.383591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.580 [2024-12-05 14:15:56.383598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.383610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.580 [2024-12-05 14:15:56.383616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.383629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.580 [2024-12-05 14:15:56.383637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.383649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:85000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.580 [2024-12-05 14:15:56.383655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.383668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.580 [2024-12-05 14:15:56.383674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.383686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.580 [2024-12-05 14:15:56.383693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.383705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:85048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.580 [2024-12-05 14:15:56.383711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.383723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.580 [2024-12-05 14:15:56.383729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.383742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:84712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.580 [2024-12-05 14:15:56.383748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.383761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 
nsid:1 lba:84744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.580 [2024-12-05 14:15:56.383767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.384612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.580 [2024-12-05 14:15:56.384623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.384635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.580 [2024-12-05 14:15:56.384640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.384650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.580 [2024-12-05 14:15:56.384655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.384665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.580 [2024-12-05 14:15:56.384670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.384681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.580 [2024-12-05 14:15:56.384686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.384698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.580 [2024-12-05 14:15:56.384703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.384713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.580 [2024-12-05 14:15:56.384718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.384728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.580 [2024-12-05 14:15:56.384733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:52.580 [2024-12-05 14:15:56.384743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.580 [2024-12-05 14:15:56.384748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.384758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.581 [2024-12-05 14:15:56.384763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.384773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.581 [2024-12-05 14:15:56.384778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.384788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.581 [2024-12-05 14:15:56.384793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.384804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.581 [2024-12-05 14:15:56.384809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.384819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.581 [2024-12-05 14:15:56.384824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.384834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.581 [2024-12-05 14:15:56.384839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.385140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.581 [2024-12-05 14:15:56.385149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.385160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.581 [2024-12-05 14:15:56.385166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.385178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:85064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.581 [2024-12-05 14:15:56.385184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.385194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.581 [2024-12-05 14:15:56.385199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 
dnr:0 00:25:52.581 [2024-12-05 14:15:56.385209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.581 [2024-12-05 14:15:56.385214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.385225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:85112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.581 [2024-12-05 14:15:56.385230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.385240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.581 [2024-12-05 14:15:56.385245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.385255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.581 [2024-12-05 14:15:56.385261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.385271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.581 [2024-12-05 14:15:56.385276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.385286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.581 [2024-12-05 14:15:56.385292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.385302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.581 [2024-12-05 14:15:56.385307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.385318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.581 [2024-12-05 14:15:56.385323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.385333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.581 [2024-12-05 14:15:56.385338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.385349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.581 [2024-12-05 14:15:56.385354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.385364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.581 [2024-12-05 14:15:56.385371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.385381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:85016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.581 [2024-12-05 14:15:56.385386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.385396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.581 [2024-12-05 14:15:56.385401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.385412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.581 [2024-12-05 14:15:56.385417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.386263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:84544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.581 [2024-12-05 14:15:56.386275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.386287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.581 [2024-12-05 14:15:56.386292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.386302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.581 [2024-12-05 14:15:56.386307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.386318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.581 [2024-12-05 14:15:56.386322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.386332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.581 [2024-12-05 14:15:56.386337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.386348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.581 [2024-12-05 14:15:56.386352] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.386362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.581 [2024-12-05 14:15:56.386367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.386377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:85232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.581 [2024-12-05 14:15:56.386382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.386392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.581 [2024-12-05 14:15:56.386400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.386410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.581 [2024-12-05 14:15:56.386415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.386426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:84848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.581 [2024-12-05 14:15:56.386431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.386441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.581 [2024-12-05 14:15:56.386446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.386461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.581 [2024-12-05 14:15:56.386466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.386476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.581 [2024-12-05 14:15:56.386481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:52.581 [2024-12-05 14:15:56.386492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:84800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.581 [2024-12-05 14:15:56.386497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.386507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:52.582 [2024-12-05 14:15:56.386512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.386522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.582 [2024-12-05 14:15:56.386527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.386538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.582 [2024-12-05 14:15:56.386543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.386553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:84976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.582 [2024-12-05 14:15:56.386558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.386578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.582 [2024-12-05 14:15:56.386583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.582 [2024-12-05 14:15:56.387396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.582 [2024-12-05 14:15:56.387415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.582 [2024-12-05 14:15:56.387431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.582 [2024-12-05 14:15:56.387446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.582 [2024-12-05 14:15:56.387467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 
lba:85112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.582 [2024-12-05 14:15:56.387482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.582 [2024-12-05 14:15:56.387497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.582 [2024-12-05 14:15:56.387513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.582 [2024-12-05 14:15:56.387528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.582 [2024-12-05 14:15:56.387543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.582 [2024-12-05 14:15:56.387558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.582 [2024-12-05 14:15:56.387574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.582 [2024-12-05 14:15:56.387589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.582 [2024-12-05 14:15:56.387606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.582 [2024-12-05 14:15:56.387621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.582 [2024-12-05 14:15:56.387637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.582 [2024-12-05 14:15:56.387653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.582 [2024-12-05 14:15:56.387668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:85312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.582 [2024-12-05 14:15:56.387683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.582 [2024-12-05 14:15:56.387699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.582 [2024-12-05 14:15:56.387714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.582 [2024-12-05 14:15:56.387730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.582 [2024-12-05 14:15:56.387745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.582 [2024-12-05 14:15:56.387760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.387771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.582 [2024-12-05 14:15:56.387776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 
00:25:52.582 [2024-12-05 14:15:56.388449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:85136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.582 [2024-12-05 14:15:56.388467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.388479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:85168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.582 [2024-12-05 14:15:56.388484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.388495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:85200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.582 [2024-12-05 14:15:56.388500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.388510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.582 [2024-12-05 14:15:56.388516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.388526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.582 [2024-12-05 14:15:56.388532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.388542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.582 [2024-12-05 14:15:56.388547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.388558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.582 [2024-12-05 14:15:56.388563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.388573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.582 [2024-12-05 14:15:56.388578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:52.582 [2024-12-05 14:15:56.388589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-12-05 14:15:56.388594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.388604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-12-05 14:15:56.388610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.388963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-12-05 14:15:56.388972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.388983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.583 [2024-12-05 14:15:56.388988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.388998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.583 [2024-12-05 14:15:56.389005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.389015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.583 [2024-12-05 14:15:56.389020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.389031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.583 [2024-12-05 14:15:56.389036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.389046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.583 [2024-12-05 14:15:56.389051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.389061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.583 [2024-12-05 14:15:56.389066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.389076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.583 [2024-12-05 14:15:56.389082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.389092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.583 [2024-12-05 14:15:56.389097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.389107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-12-05 14:15:56.389112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.389122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-12-05 14:15:56.389127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.389138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-12-05 14:15:56.389143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.389153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-12-05 14:15:56.389158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.389168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-12-05 14:15:56.389174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.389184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.583 [2024-12-05 14:15:56.389189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.389202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-12-05 14:15:56.389207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.389218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-12-05 14:15:56.389223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.389233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-12-05 14:15:56.389238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.389249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-12-05 14:15:56.389254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.389264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:52.583 [2024-12-05 14:15:56.389270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.389280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.583 [2024-12-05 14:15:56.389285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.389295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:85328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.583 [2024-12-05 14:15:56.389301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.389311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.583 [2024-12-05 14:15:56.389316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.389327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-12-05 14:15:56.389332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.390117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:85000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-12-05 14:15:56.390128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.390139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:85144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-12-05 14:15:56.390144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.390154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-12-05 14:15:56.390159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.390172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-12-05 14:15:56.390177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.390187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:85240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-12-05 14:15:56.390192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.390202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:21 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.583 [2024-12-05 14:15:56.390208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.390218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.583 [2024-12-05 14:15:56.390223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.390233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.583 [2024-12-05 14:15:56.390238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.390249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.583 [2024-12-05 14:15:56.390254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.390264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.583 [2024-12-05 14:15:56.390269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.390279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.583 [2024-12-05 14:15:56.390284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.390294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.583 [2024-12-05 14:15:56.390300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.390310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.583 [2024-12-05 14:15:56.390315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:52.583 [2024-12-05 14:15:56.390325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.584 [2024-12-05 14:15:56.390330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.390341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-12-05 14:15:56.390346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.390886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-12-05 14:15:56.390898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.390910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-12-05 14:15:56.390915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.390925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.584 [2024-12-05 14:15:56.390930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.390940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:85096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-12-05 14:15:56.390945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.390956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-12-05 14:15:56.390961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.390971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-12-05 14:15:56.390976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.390986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.584 [2024-12-05 14:15:56.390991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.584 [2024-12-05 14:15:56.391006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.584 [2024-12-05 14:15:56.391021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.584 [2024-12-05 14:15:56.391036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000a p:0 m:0 dnr:0 
00:25:52.584 [2024-12-05 14:15:56.391046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-12-05 14:15:56.391052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-12-05 14:15:56.391067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-12-05 14:15:56.391084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-12-05 14:15:56.391099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-12-05 14:15:56.391115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-12-05 14:15:56.391130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.584 [2024-12-05 14:15:56.391145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-12-05 14:15:56.391161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.584 [2024-12-05 14:15:56.391506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.584 [2024-12-05 14:15:56.391522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.584 [2024-12-05 14:15:56.391538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-12-05 14:15:56.391553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:85288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-12-05 14:15:56.391569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:85320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-12-05 14:15:56.391585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:85352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-12-05 14:15:56.391600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.584 [2024-12-05 14:15:56.391618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.584 [2024-12-05 14:15:56.391633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.584 [2024-12-05 14:15:56.391649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.584 [2024-12-05 14:15:56.391665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:85184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-12-05 14:15:56.391681] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-12-05 14:15:56.391696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:85144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-12-05 14:15:56.391712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:52.584 [2024-12-05 14:15:56.391722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:85208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.584 [2024-12-05 14:15:56.391728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.391738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.585 [2024-12-05 14:15:56.391743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.391753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.585 [2024-12-05 14:15:56.391759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.391769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.585 [2024-12-05 14:15:56.391774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.391785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.585 [2024-12-05 14:15:56.391790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.391802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.585 [2024-12-05 14:15:56.391808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-12-05 14:15:56.392585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:52.585 [2024-12-05 14:15:56.392601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.585 [2024-12-05 14:15:56.392617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.585 [2024-12-05 14:15:56.392632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.585 [2024-12-05 14:15:56.392647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-12-05 14:15:56.392662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:85432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-12-05 14:15:56.392677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:85464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-12-05 14:15:56.392693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:85496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-12-05 14:15:56.392708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-12-05 14:15:56.392723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:85096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-12-05 14:15:56.392738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-12-05 14:15:56.392756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.585 [2024-12-05 14:15:56.392771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.585 [2024-12-05 14:15:56.392786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-12-05 14:15:56.392802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-12-05 14:15:56.392817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-12-05 14:15:56.392833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-12-05 14:15:56.392848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-12-05 14:15:56.392863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-12-05 14:15:56.392879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:85376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-12-05 14:15:56.392894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392905] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.585 [2024-12-05 14:15:56.392910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:85264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-12-05 14:15:56.392925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:85320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-12-05 14:15:56.392942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.585 [2024-12-05 14:15:56.392958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.585 [2024-12-05 14:15:56.392973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.392984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:85184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-12-05 14:15:56.392989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.393000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:85144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-12-05 14:15:56.393004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.393015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.585 [2024-12-05 14:15:56.393020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.393030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.585 [2024-12-05 14:15:56.393036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.393046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.585 [2024-12-05 14:15:56.393052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 
00:25:52.585 [2024-12-05 14:15:56.393668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:85528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-12-05 14:15:56.393678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.393689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:85560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-12-05 14:15:56.393695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:52.585 [2024-12-05 14:15:56.393705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:85136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.585 [2024-12-05 14:15:56.393710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.393721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.586 [2024-12-05 14:15:56.393726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.393736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.586 [2024-12-05 14:15:56.393744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.393755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.586 [2024-12-05 14:15:56.393760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.393770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.586 [2024-12-05 14:15:56.393776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.393786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.586 [2024-12-05 14:15:56.393791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.393802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.586 [2024-12-05 14:15:56.393807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.394803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:85440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.586 [2024-12-05 14:15:56.394815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.394827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:85504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.586 [2024-12-05 14:15:56.394833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.394843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.586 [2024-12-05 14:15:56.394849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.394859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.586 [2024-12-05 14:15:56.394865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.394875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.586 [2024-12-05 14:15:56.394880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.394891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:85464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.586 [2024-12-05 14:15:56.394896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.394907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.586 [2024-12-05 14:15:56.394912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.394922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.586 [2024-12-05 14:15:56.394927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.394940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.586 [2024-12-05 14:15:56.394945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.394955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.586 [2024-12-05 14:15:56.394961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.394971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.586 [2024-12-05 14:15:56.394976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.394987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:85312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.586 [2024-12-05 14:15:56.394992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.395002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.586 [2024-12-05 14:15:56.395007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.395018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.586 [2024-12-05 14:15:56.395023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.395033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.586 [2024-12-05 14:15:56.395038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.395049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:85144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.586 [2024-12-05 14:15:56.395054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.395064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.586 [2024-12-05 14:15:56.395070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.395080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:85296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.586 [2024-12-05 14:15:56.395085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.395096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.586 [2024-12-05 14:15:56.395101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.395111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.586 [2024-12-05 14:15:56.395116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.395128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:52.586 [2024-12-05 14:15:56.395133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.395144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.586 [2024-12-05 14:15:56.395149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.395159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:85600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.586 [2024-12-05 14:15:56.395164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.395174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.586 [2024-12-05 14:15:56.395180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.395190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:85664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.586 [2024-12-05 14:15:56.395195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.395205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:85696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.586 [2024-12-05 14:15:56.395211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.395221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:85560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.586 [2024-12-05 14:15:56.395226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.395237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:85776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.586 [2024-12-05 14:15:56.395242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.395252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.586 [2024-12-05 14:15:56.395257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.395268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.586 [2024-12-05 14:15:56.395273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.395972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 
lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.586 [2024-12-05 14:15:56.395985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.396007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.586 [2024-12-05 14:15:56.396012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:52.586 [2024-12-05 14:15:56.396022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.586 [2024-12-05 14:15:56.396030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:52.587 [2024-12-05 14:15:56.396040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.587 [2024-12-05 14:15:56.396045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:52.587 [2024-12-05 14:15:56.396055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:85992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.587 [2024-12-05 14:15:56.396060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:52.587 [2024-12-05 14:15:56.396070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.587 [2024-12-05 14:15:56.396075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:52.587 [2024-12-05 14:15:56.396086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.587 [2024-12-05 14:15:56.396091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:52.587 [2024-12-05 14:15:56.396101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.587 [2024-12-05 14:15:56.396106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:52.587 [2024-12-05 14:15:56.396116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:85736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.587 [2024-12-05 14:15:56.396122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:52.587 [2024-12-05 14:15:56.396569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:85592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.587 [2024-12-05 14:15:56.396579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:52.587 [2024-12-05 14:15:56.396590] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:85456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.587 [2024-12-05 14:15:56.396596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:52.587 [2024-12-05 14:15:56.396606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.587 [2024-12-05 14:15:56.396612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:52.587 [2024-12-05 14:15:56.396622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.587 [2024-12-05 14:15:56.396627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:52.587 [2024-12-05 14:15:56.396638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:86056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.587 [2024-12-05 14:15:56.396643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:52.587 [2024-12-05 14:15:56.396654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.587 [2024-12-05 14:15:56.396661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:52.587 [2024-12-05 14:15:56.396672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:86088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.587 [2024-12-05 14:15:56.396677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:52.587 [2024-12-05 14:15:56.396688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:52.587 [2024-12-05 14:15:56.396693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:52.587 11790.40 IOPS, 46.06 MiB/s [2024-12-05T13:15:58.887Z] 11826.69 IOPS, 46.20 MiB/s [2024-12-05T13:15:58.887Z] Received shutdown signal, test time was about 26.742648 seconds 00:25:52.587 00:25:52.587 Latency(us) 00:25:52.587 [2024-12-05T13:15:58.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:52.587 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:52.587 Verification LBA range: start 0x0 length 0x4000 00:25:52.587 Nvme0n1 : 26.74 11859.16 46.32 0.00 0.00 10775.32 655.36 3019898.88 00:25:52.587 [2024-12-05T13:15:58.887Z] =================================================================================================================== 00:25:52.587 [2024-12-05T13:15:58.887Z] Total : 11859.16 46.32 0.00 0.00 10775.32 655.36 3019898.88 00:25:52.587 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:52.848 14:15:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:52.848 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:52.848 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:52.848 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:52.848 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:25:52.848 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:52.848 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:25:52.848 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:52.848 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:52.848 rmmod nvme_tcp 00:25:52.848 rmmod nvme_fabrics 00:25:52.848 rmmod nvme_keyring 00:25:52.848 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:52.848 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:25:52.848 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:25:52.848 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2855190 ']' 00:25:52.848 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2855190 00:25:52.848 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2855190 ']' 00:25:52.848 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2855190 00:25:52.848 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:52.848 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:52.848 14:15:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2855190 00:25:52.848 14:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:52.848 14:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:52.848 14:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2855190' 00:25:52.848 killing process with pid 2855190 00:25:52.848 14:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2855190 00:25:52.848 14:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2855190 00:25:52.849 14:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:52.849 14:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:52.849 14:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:52.849 14:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:25:52.849 14:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@791 -- # iptables-save 00:25:52.849 14:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:52.849 14:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:25:52.849 14:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:52.849 14:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:52.849 14:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.849 14:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:52.849 14:15:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:55.393 00:25:55.393 real 0m41.227s 00:25:55.393 user 1m46.711s 00:25:55.393 sys 0m11.434s 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:55.393 ************************************ 00:25:55.393 END TEST nvmf_host_multipath_status 00:25:55.393 ************************************ 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.393 ************************************ 00:25:55.393 START TEST nvmf_discovery_remove_ifc 00:25:55.393 ************************************ 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:55.393 * Looking for test storage... 
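(annotation — not captured output: the MiB/s column in the device summary table above is just IOPS scaled by the fixed 4096-byte verify I/O size; a one-line check of the Nvme0n1/Total row, runnable anywhere awk exists:)
awk 'BEGIN { printf "%.2f MiB/s\n", 11859.16 * 4096 / 1048576 }'   # prints 46.32 MiB/s, matching the table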
00:25:55.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:55.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.393 --rc genhtml_branch_coverage=1 00:25:55.393 --rc genhtml_function_coverage=1 00:25:55.393 --rc genhtml_legend=1 00:25:55.393 --rc geninfo_all_blocks=1 00:25:55.393 --rc geninfo_unexecuted_blocks=1 00:25:55.393 00:25:55.393 ' 00:25:55.393 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:55.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.393 --rc genhtml_branch_coverage=1 00:25:55.393 --rc genhtml_function_coverage=1 00:25:55.393 --rc genhtml_legend=1 00:25:55.393 --rc geninfo_all_blocks=1 00:25:55.394 --rc geninfo_unexecuted_blocks=1 00:25:55.394 00:25:55.394 ' 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:55.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.394 --rc genhtml_branch_coverage=1 00:25:55.394 --rc genhtml_function_coverage=1 00:25:55.394 --rc genhtml_legend=1 00:25:55.394 --rc geninfo_all_blocks=1 00:25:55.394 --rc geninfo_unexecuted_blocks=1 00:25:55.394 00:25:55.394 ' 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:55.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.394 --rc genhtml_branch_coverage=1 00:25:55.394 --rc genhtml_function_coverage=1 00:25:55.394 --rc genhtml_legend=1 00:25:55.394 --rc geninfo_all_blocks=1 00:25:55.394 --rc geninfo_unexecuted_blocks=1 00:25:55.394 00:25:55.394 ' 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:55.394 
14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:55.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:55.394 14:16:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:03.562 14:16:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:03.562 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:03.562 14:16:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:03.562 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:03.562 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:03.563 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:03.563 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:03.563 
14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:03.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:03.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:26:03.563 00:26:03.563 --- 10.0.0.2 ping statistics --- 00:26:03.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.563 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:03.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:03.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:26:03.563 00:26:03.563 --- 10.0.0.1 ping statistics --- 00:26:03.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.563 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2865515 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2865515 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2865515 ']' 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
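(annotation — not captured output: a consolidated sketch of the network plumbing the harness just performed; interface names, addresses, and flags are taken from the log above, the nvmf_tgt path is shortened to build/bin/nvmf_tgt, and the waitforlisten retry logic is omitted:)
ip netns add cvl_0_0_ns_spdk                           # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                     # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator reachability
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2   # SPDK target inside the namespace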
00:26:03.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:03.563 14:16:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:03.563 [2024-12-05 14:16:09.042238] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:26:03.563 [2024-12-05 14:16:09.042300] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:03.563 [2024-12-05 14:16:09.140520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.563 [2024-12-05 14:16:09.191246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:03.563 [2024-12-05 14:16:09.191295] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:03.563 [2024-12-05 14:16:09.191304] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:03.563 [2024-12-05 14:16:09.191311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:03.563 [2024-12-05 14:16:09.191317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:03.563 [2024-12-05 14:16:09.192048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.563 14:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:03.563 14:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:03.563 14:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:03.563 14:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:03.563 14:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:03.825 14:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:03.825 14:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:03.825 14:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.825 14:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:03.825 [2024-12-05 14:16:09.910945] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:03.825 [2024-12-05 14:16:09.919173] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:03.825 null0 00:26:03.825 [2024-12-05 14:16:09.951138] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.825 14:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.825 14:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2865747 00:26:03.825 14:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2865747 /tmp/host.sock 00:26:03.825 14:16:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:03.825 14:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2865747 ']' 00:26:03.825 14:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:03.825 14:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:03.825 14:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:03.825 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:03.825 14:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:03.825 14:16:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:03.825 [2024-12-05 14:16:10.029829] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:26:03.825 [2024-12-05 14:16:10.029897] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2865747 ] 00:26:04.086 [2024-12-05 14:16:10.125026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.086 [2024-12-05 14:16:10.190351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.658 14:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:04.658 14:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:04.658 14:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:04.658 14:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:04.658 14:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.658 14:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:04.658 14:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.658 14:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:04.658 14:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.658 14:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:04.919 14:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.919 14:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:04.919 14:16:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.919 14:16:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:05.865 [2024-12-05 14:16:12.019638] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:05.865 [2024-12-05 14:16:12.019669] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:05.865 [2024-12-05 14:16:12.019690] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:05.865 [2024-12-05 14:16:12.107955] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:06.126 [2024-12-05 14:16:12.290270] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:06.126 [2024-12-05 14:16:12.291271] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2038250:1 started. 00:26:06.126 [2024-12-05 14:16:12.292882] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:06.126 [2024-12-05 14:16:12.292930] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:06.126 [2024-12-05 14:16:12.292952] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:06.126 [2024-12-05 14:16:12.292967] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:06.126 [2024-12-05 14:16:12.292988] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:06.126 14:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.126 14:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:06.126 14:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:06.126 [2024-12-05 14:16:12.299117] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2038250 was disconnected and freed. delete nvme_qpair. 
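(annotation — not captured output: the rpc_cmd/jq/sleep blocks that follow implement a bdev-list polling loop; roughly the sketch below, with the rpc.py path shortened and any loop bound omitted, so read it as an illustration of the idiom rather than the exact in-tree helper:)
get_bdev_list() {
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
wait_for_bdev() {                       # spin until the bdev list equals "$1"
    while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
}
wait_for_bdev nvme0n1                   # here: wait until discovery has attached nvme0n1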
00:26:06.126 14:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:06.126 14:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:06.126 14:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.126 14:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:06.126 14:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.126 14:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:06.126 14:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.126 14:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:06.126 14:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:06.126 14:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:06.387 14:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:06.387 14:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:06.387 14:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:06.387 14:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:06.387 14:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.387 14:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:06.387 14:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:06.387 14:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:06.387 14:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.387 14:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:06.387 14:16:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:07.331 14:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:07.331 14:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:07.331 14:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:07.331 14:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.331 14:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:07.331 14:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:07.331 14:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:07.331 14:16:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.331 14:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:07.331 14:16:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:08.717 14:16:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:08.717 14:16:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:08.717 14:16:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:08.717 14:16:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.717 14:16:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:08.717 14:16:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:08.717 14:16:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:08.717 14:16:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.717 14:16:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:08.717 14:16:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:09.656 14:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:09.656 14:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:09.656 14:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:09.656 14:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.656 14:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:09.656 14:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:09.656 14:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:09.656 14:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.656 14:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:09.656 14:16:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:10.633 14:16:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:10.633 14:16:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:10.633 14:16:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:10.633 14:16:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.633 14:16:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:10.633 14:16:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.633 14:16:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:10.633 14:16:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.633 14:16:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:10.633 14:16:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:11.576 [2024-12-05 14:16:17.733414] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:11.576 [2024-12-05 14:16:17.733445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.576 [2024-12-05 14:16:17.733456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.576 [2024-12-05 14:16:17.733464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.576 [2024-12-05 14:16:17.733469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.577 [2024-12-05 14:16:17.733475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.577 [2024-12-05 14:16:17.733480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.577 [2024-12-05 14:16:17.733485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.577 [2024-12-05 14:16:17.733491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.577 [2024-12-05 14:16:17.733496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.577 [2024-12-05 14:16:17.733501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.577 [2024-12-05 14:16:17.733506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2014a50 is same with the state(6) to be set 00:26:11.577 [2024-12-05 14:16:17.743435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2014a50 (9): Bad file descriptor 00:26:11.577 14:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:11.577 14:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:11.577 14:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:11.577 14:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.577 14:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:11.577 14:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:11.577 14:16:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:26:11.577 [2024-12-05 14:16:17.753469] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:11.577 [2024-12-05 14:16:17.753478] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:11.577 [2024-12-05 14:16:17.753483] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:11.577 [2024-12-05 14:16:17.753488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:11.577 [2024-12-05 14:16:17.753503] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:12.647 [2024-12-05 14:16:18.756536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:12.647 [2024-12-05 14:16:18.756630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2014a50 with addr=10.0.0.2, port=4420 00:26:12.647 [2024-12-05 14:16:18.756662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2014a50 is same with the state(6) to be set 00:26:12.647 [2024-12-05 14:16:18.756719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2014a50 (9): Bad file descriptor 00:26:12.647 [2024-12-05 14:16:18.757837] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:26:12.647 [2024-12-05 14:16:18.757908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:12.647 [2024-12-05 14:16:18.757930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:12.647 [2024-12-05 14:16:18.757954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:12.647 [2024-12-05 14:16:18.757975] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:12.647 [2024-12-05 14:16:18.757991] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:12.647 [2024-12-05 14:16:18.758005] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:12.647 [2024-12-05 14:16:18.758027] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:12.647 [2024-12-05 14:16:18.758042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:12.647 14:16:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.647 14:16:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:12.647 14:16:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:13.589 [2024-12-05 14:16:19.760463] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:13.589 [2024-12-05 14:16:19.760479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
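[Annotation] The trace above is one pass of the test's one-second polling cycle: get_bdev_list dumps the bdev names over the test RPC socket and the caller sleeps until the list matches what it expects (here, until nvme0n1 disappears). A minimal reconstruction of that helper pair, assembled from the commands visible in the xtrace — the pipeline is verbatim from the trace, while the unbounded loop relies on the suite's outer timeout, which this excerpt does not show:

    # Reconstruction of the polling helpers traced above (sketch only).
    # rpc_cmd, /tmp/host.sock and the one-second cadence are taken from
    # the xtrace; termination is left to the suite's outer timeout.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local want=$1
        while [[ "$(get_bdev_list)" != "$want" ]]; do
            sleep 1
        done
    }

With wait_for_bdev '' the loop runs until the bdev list is empty, which is exactly the nvme0n1-removal wait playing out in the trace.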
00:26:13.589 [2024-12-05 14:16:19.760487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:13.589 [2024-12-05 14:16:19.760493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:13.589 [2024-12-05 14:16:19.760498] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:13.589 [2024-12-05 14:16:19.760503] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:13.589 [2024-12-05 14:16:19.760507] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:13.590 [2024-12-05 14:16:19.760514] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:13.590 [2024-12-05 14:16:19.760530] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:13.590 [2024-12-05 14:16:19.760547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.590 [2024-12-05 14:16:19.760555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.590 [2024-12-05 14:16:19.760563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.590 [2024-12-05 14:16:19.760568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.590 [2024-12-05 14:16:19.760573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.590 [2024-12-05 14:16:19.760578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.590 [2024-12-05 14:16:19.760584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.590 [2024-12-05 14:16:19.760590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.590 [2024-12-05 14:16:19.760595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:13.590 [2024-12-05 14:16:19.760601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:13.590 [2024-12-05 14:16:19.760606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
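[Annotation] With the controller stuck in the failed state and the discovery entry removed, the test heals the path: it restores the target address inside the target's network namespace and then waits for discovery to re-attach the subsystem as nvme1n1. The two ip commands are verbatim from the trace that follows; wait_for_bdev is the helper sketched above:

    # Recovery step as traced below; the namespace, interface and
    # address are the values this particular run uses.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1   # returns once discovery re-attaches the subsystem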
00:26:13.590 [2024-12-05 14:16:19.761006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20041a0 (9): Bad file descriptor 00:26:13.590 [2024-12-05 14:16:19.762016] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:13.590 [2024-12-05 14:16:19.762024] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:13.590 14:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:13.590 14:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:13.590 14:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:13.590 14:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.590 14:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:13.590 14:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.590 14:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:13.590 14:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.590 14:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:13.590 14:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:13.590 14:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:13.850 14:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:13.850 14:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:13.850 14:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:13.850 14:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:13.850 14:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.850 14:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:13.850 14:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.850 14:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:13.850 14:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.850 14:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:13.850 14:16:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:14.792 14:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:14.792 14:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.792 14:16:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:14.792 14:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.792 14:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:14.792 14:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:14.792 14:16:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:14.792 14:16:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.792 14:16:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:14.792 14:16:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:15.732 [2024-12-05 14:16:21.774960] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:15.732 [2024-12-05 14:16:21.774977] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:15.732 [2024-12-05 14:16:21.774987] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:15.733 [2024-12-05 14:16:21.905360] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:15.993 14:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:15.993 14:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.993 14:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:15.993 14:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.993 14:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:15.993 14:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:15.993 14:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:15.993 14:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.993 [2024-12-05 14:16:22.085465] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:15.993 [2024-12-05 14:16:22.086157] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x20419c0:1 started. 
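[Annotation] The re-attach above happens without any action from the test body: a discovery service is already watching 10.0.0.2:8009, sees the subsystem reappear in the discovery log page, and creates nvme1 on its own. The command that started that service lies outside this excerpt, so the invocation below is a plausible reconstruction assuming SPDK's bdev_nvme_start_discovery RPC with the address and port shown in the Discovery[10.0.0.2:8009] entries — not the suite's literal call:

    # Plausible discovery-service start; reconstructed, not copied from
    # this excerpt. -b sets the bdev name prefix (hence "nvme1"), and
    # -w waits for the initial attach before the RPC returns.
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -w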
00:26:15.993 [2024-12-05 14:16:22.087060] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:15.993 [2024-12-05 14:16:22.087088] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:15.993 [2024-12-05 14:16:22.087102] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:15.993 [2024-12-05 14:16:22.087113] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:15.993 [2024-12-05 14:16:22.087123] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:15.993 [2024-12-05 14:16:22.093305] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x20419c0 was disconnected and freed. delete nvme_qpair. 00:26:15.993 14:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:15.993 14:16:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:16.934 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:16.934 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.934 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:16.934 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.934 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:16.934 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:16.934 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:16.934 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.934 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:16.934 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:16.934 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2865747 00:26:16.934 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2865747 ']' 00:26:16.934 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2865747 00:26:16.934 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:16.934 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:16.934 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2865747 00:26:16.934 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:16.934 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:16.934 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2865747' 00:26:16.934 killing process with pid 2865747 
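[Annotation] killprocess's guard rails are all visible in the trace above: it rejects an empty pid, confirms the process is still alive with kill -0, and on Linux reads the command name with ps --no-headers -o comm= so it never signals a sudo wrapper by mistake; only then does it kill and wait (the wait also reaps the exit status, keeping set -e semantics intact). Condensed into one function following the traced checks — what the real helper does when the name *is* sudo lies outside this trace, so the sketch simply refuses:

    # Condensed reconstruction of the killprocess checks traced above.
    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1             # the '[' -z ... ']' guard
        kill -0 "$pid" || return 1              # process must still exist
        if [[ "$(uname)" == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [[ "$name" != sudo ]] || return 1   # never signal a sudo wrapper (assumed handling)
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }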
00:26:16.934 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2865747 00:26:16.934 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2865747 00:26:17.195 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:17.195 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:17.195 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:17.195 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:17.195 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:17.195 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:17.195 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:17.195 rmmod nvme_tcp 00:26:17.195 rmmod nvme_fabrics 00:26:17.195 rmmod nvme_keyring 00:26:17.195 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:17.195 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:17.195 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:17.195 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2865515 ']' 00:26:17.195 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2865515 00:26:17.195 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2865515 ']' 00:26:17.195 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2865515 00:26:17.195 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:17.195 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:17.195 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2865515 00:26:17.195 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:17.195 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:17.195 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2865515' 00:26:17.195 killing process with pid 2865515 00:26:17.195 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2865515 00:26:17.195 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2865515 00:26:17.455 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:17.455 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:17.455 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:17.455 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:17.455 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:26:17.455 14:16:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:17.455 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:17.455 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:17.455 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:17.455 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.455 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:17.455 14:16:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.366 14:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:19.366 00:26:19.366 real 0m24.353s 00:26:19.366 user 0m29.519s 00:26:19.366 sys 0m7.084s 00:26:19.366 14:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:19.366 14:16:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.366 ************************************ 00:26:19.366 END TEST nvmf_discovery_remove_ifc 00:26:19.366 ************************************ 00:26:19.626 14:16:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:19.626 14:16:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.627 ************************************ 00:26:19.627 START TEST nvmf_identify_kernel_target 00:26:19.627 ************************************ 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:19.627 * Looking for test storage... 
00:26:19.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:19.627 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:19.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.888 --rc genhtml_branch_coverage=1 00:26:19.888 --rc genhtml_function_coverage=1 00:26:19.888 --rc genhtml_legend=1 00:26:19.888 --rc geninfo_all_blocks=1 00:26:19.888 --rc geninfo_unexecuted_blocks=1 00:26:19.888 00:26:19.888 ' 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:19.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.888 --rc genhtml_branch_coverage=1 00:26:19.888 --rc genhtml_function_coverage=1 00:26:19.888 --rc genhtml_legend=1 00:26:19.888 --rc geninfo_all_blocks=1 00:26:19.888 --rc geninfo_unexecuted_blocks=1 00:26:19.888 00:26:19.888 ' 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:19.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.888 --rc genhtml_branch_coverage=1 00:26:19.888 --rc genhtml_function_coverage=1 00:26:19.888 --rc genhtml_legend=1 00:26:19.888 --rc geninfo_all_blocks=1 00:26:19.888 --rc geninfo_unexecuted_blocks=1 00:26:19.888 00:26:19.888 ' 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:19.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.888 --rc genhtml_branch_coverage=1 00:26:19.888 --rc genhtml_function_coverage=1 00:26:19.888 --rc genhtml_legend=1 00:26:19.888 --rc geninfo_all_blocks=1 00:26:19.888 --rc geninfo_unexecuted_blocks=1 00:26:19.888 00:26:19.888 ' 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:19.888 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:26:19.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:19.889 14:16:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:28.029 14:16:33 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:28.029 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:28.030 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:28.030 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:28.030 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:28.030 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:28.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:26:28.030 00:26:28.030 --- 10.0.0.2 ping statistics --- 00:26:28.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.030 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:28.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:28.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:26:28.030 00:26:28.030 --- 10.0.0.1 ping statistics --- 00:26:28.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.030 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.030 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:28.031 14:16:33 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:28.031 14:16:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:30.589 Waiting for block devices as requested 00:26:30.589 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:30.849 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:30.849 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:30.849 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:31.110 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:31.110 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:31.110 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:31.401 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:31.401 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:31.401 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:31.661 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:31.661 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:31.661 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:31.921 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:31.921 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:31.921 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:32.181 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
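[Annotation] The device loop above is vetting /dev/nvme0n1 (the zoned check just passed); the next trace lines confirm it carries no GPT (spdk-gpt.py, blkid) and then configure_kernel_target assembles a kernel nvmet target around it entirely through configfs: a subsystem, a namespace backed by the block device, and a TCP port on 10.0.0.1:4420 that the subsystem is linked into. xtrace does not record where each echo is redirected, so the attribute paths below follow the kernel's standard nvmet configfs layout rather than being copied from the script:

    # The traced configfs sequence with its redirection targets restored.
    # The echo commands and their order are verbatim from the trace; the
    # destination attribute names are assumed from the standard nvmet layout.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"   # serial attr; destination assumed
    echo 1             > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"
    echo 1             > "$subsys/namespaces/1/enable"
    echo 10.0.0.1      > "$nvmet/ports/1/addr_traddr"
    echo tcp           > "$nvmet/ports/1/addr_trtype"
    echo 4420          > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4          > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

The nvme discover run that follows in the trace is the sanity check: two discovery log entries come back, one for the discovery subsystem itself and one for nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420.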
00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:32.181 No valid GPT data, bailing 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:26:32.181 00:26:32.181 Discovery Log Number of Records 2, Generation counter 2 00:26:32.181 =====Discovery Log Entry 0====== 00:26:32.181 trtype: tcp 00:26:32.181 adrfam: ipv4 00:26:32.181 subtype: current discovery subsystem 00:26:32.181 treq: not specified, sq flow control disable supported 00:26:32.181 portid: 1 00:26:32.181 trsvcid: 4420 00:26:32.181 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:32.181 traddr: 10.0.0.1 00:26:32.181 eflags: none 00:26:32.181 sectype: none 00:26:32.181 =====Discovery Log Entry 1====== 00:26:32.181 trtype: tcp 00:26:32.181 adrfam: ipv4 00:26:32.181 subtype: nvme subsystem 00:26:32.181 treq: not specified, sq flow control disable 
supported 00:26:32.181 portid: 1 00:26:32.181 trsvcid: 4420 00:26:32.181 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:32.181 traddr: 10.0.0.1 00:26:32.181 eflags: none 00:26:32.181 sectype: none 00:26:32.181 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:32.181 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:32.443 ===================================================== 00:26:32.443 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:32.443 ===================================================== 00:26:32.443 Controller Capabilities/Features 00:26:32.443 ================================ 00:26:32.443 Vendor ID: 0000 00:26:32.443 Subsystem Vendor ID: 0000 00:26:32.443 Serial Number: 6762d947c9e0aac19571 00:26:32.443 Model Number: Linux 00:26:32.443 Firmware Version: 6.8.9-20 00:26:32.443 Recommended Arb Burst: 0 00:26:32.443 IEEE OUI Identifier: 00 00 00 00:26:32.443 Multi-path I/O 00:26:32.443 May have multiple subsystem ports: No 00:26:32.443 May have multiple controllers: No 00:26:32.443 Associated with SR-IOV VF: No 00:26:32.443 Max Data Transfer Size: Unlimited 00:26:32.443 Max Number of Namespaces: 0 00:26:32.443 Max Number of I/O Queues: 1024 00:26:32.443 NVMe Specification Version (VS): 1.3 00:26:32.443 NVMe Specification Version (Identify): 1.3 00:26:32.443 Maximum Queue Entries: 1024 00:26:32.443 Contiguous Queues Required: No 00:26:32.443 Arbitration Mechanisms Supported 00:26:32.443 Weighted Round Robin: Not Supported 00:26:32.443 Vendor Specific: Not Supported 00:26:32.443 Reset Timeout: 7500 ms 00:26:32.443 Doorbell Stride: 4 bytes 00:26:32.443 NVM Subsystem Reset: Not Supported 00:26:32.443 Command Sets Supported 00:26:32.443 NVM Command Set: Supported 00:26:32.443 Boot Partition: Not Supported 00:26:32.443 Memory Page Size Minimum: 4096 bytes 00:26:32.443 Memory Page Size Maximum: 4096 bytes 00:26:32.443 Persistent Memory Region: Not Supported 00:26:32.443 Optional Asynchronous Events Supported 00:26:32.443 Namespace Attribute Notices: Not Supported 00:26:32.443 Firmware Activation Notices: Not Supported 00:26:32.443 ANA Change Notices: Not Supported 00:26:32.443 PLE Aggregate Log Change Notices: Not Supported 00:26:32.443 LBA Status Info Alert Notices: Not Supported 00:26:32.443 EGE Aggregate Log Change Notices: Not Supported 00:26:32.443 Normal NVM Subsystem Shutdown event: Not Supported 00:26:32.443 Zone Descriptor Change Notices: Not Supported 00:26:32.443 Discovery Log Change Notices: Supported 00:26:32.443 Controller Attributes 00:26:32.443 128-bit Host Identifier: Not Supported 00:26:32.443 Non-Operational Permissive Mode: Not Supported 00:26:32.443 NVM Sets: Not Supported 00:26:32.443 Read Recovery Levels: Not Supported 00:26:32.443 Endurance Groups: Not Supported 00:26:32.443 Predictable Latency Mode: Not Supported 00:26:32.443 Traffic Based Keep ALive: Not Supported 00:26:32.443 Namespace Granularity: Not Supported 00:26:32.443 SQ Associations: Not Supported 00:26:32.443 UUID List: Not Supported 00:26:32.443 Multi-Domain Subsystem: Not Supported 00:26:32.443 Fixed Capacity Management: Not Supported 00:26:32.443 Variable Capacity Management: Not Supported 00:26:32.443 Delete Endurance Group: Not Supported 00:26:32.443 Delete NVM Set: Not Supported 00:26:32.443 Extended LBA Formats Supported: Not Supported 00:26:32.443 Flexible Data Placement 
Supported: Not Supported 00:26:32.443 00:26:32.443 Controller Memory Buffer Support 00:26:32.443 ================================ 00:26:32.443 Supported: No 00:26:32.443 00:26:32.443 Persistent Memory Region Support 00:26:32.443 ================================ 00:26:32.443 Supported: No 00:26:32.443 00:26:32.443 Admin Command Set Attributes 00:26:32.443 ============================ 00:26:32.443 Security Send/Receive: Not Supported 00:26:32.443 Format NVM: Not Supported 00:26:32.443 Firmware Activate/Download: Not Supported 00:26:32.443 Namespace Management: Not Supported 00:26:32.443 Device Self-Test: Not Supported 00:26:32.443 Directives: Not Supported 00:26:32.443 NVMe-MI: Not Supported 00:26:32.443 Virtualization Management: Not Supported 00:26:32.443 Doorbell Buffer Config: Not Supported 00:26:32.443 Get LBA Status Capability: Not Supported 00:26:32.443 Command & Feature Lockdown Capability: Not Supported 00:26:32.443 Abort Command Limit: 1 00:26:32.443 Async Event Request Limit: 1 00:26:32.443 Number of Firmware Slots: N/A 00:26:32.443 Firmware Slot 1 Read-Only: N/A 00:26:32.443 Firmware Activation Without Reset: N/A 00:26:32.443 Multiple Update Detection Support: N/A 00:26:32.443 Firmware Update Granularity: No Information Provided 00:26:32.443 Per-Namespace SMART Log: No 00:26:32.443 Asymmetric Namespace Access Log Page: Not Supported 00:26:32.443 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:32.443 Command Effects Log Page: Not Supported 00:26:32.443 Get Log Page Extended Data: Supported 00:26:32.443 Telemetry Log Pages: Not Supported 00:26:32.443 Persistent Event Log Pages: Not Supported 00:26:32.443 Supported Log Pages Log Page: May Support 00:26:32.443 Commands Supported & Effects Log Page: Not Supported 00:26:32.443 Feature Identifiers & Effects Log Page:May Support 00:26:32.443 NVMe-MI Commands & Effects Log Page: May Support 00:26:32.443 Data Area 4 for Telemetry Log: Not Supported 00:26:32.443 Error Log Page Entries Supported: 1 00:26:32.443 Keep Alive: Not Supported 00:26:32.443 00:26:32.443 NVM Command Set Attributes 00:26:32.443 ========================== 00:26:32.443 Submission Queue Entry Size 00:26:32.443 Max: 1 00:26:32.443 Min: 1 00:26:32.443 Completion Queue Entry Size 00:26:32.443 Max: 1 00:26:32.443 Min: 1 00:26:32.443 Number of Namespaces: 0 00:26:32.443 Compare Command: Not Supported 00:26:32.443 Write Uncorrectable Command: Not Supported 00:26:32.443 Dataset Management Command: Not Supported 00:26:32.443 Write Zeroes Command: Not Supported 00:26:32.443 Set Features Save Field: Not Supported 00:26:32.443 Reservations: Not Supported 00:26:32.443 Timestamp: Not Supported 00:26:32.443 Copy: Not Supported 00:26:32.443 Volatile Write Cache: Not Present 00:26:32.443 Atomic Write Unit (Normal): 1 00:26:32.443 Atomic Write Unit (PFail): 1 00:26:32.443 Atomic Compare & Write Unit: 1 00:26:32.443 Fused Compare & Write: Not Supported 00:26:32.443 Scatter-Gather List 00:26:32.443 SGL Command Set: Supported 00:26:32.443 SGL Keyed: Not Supported 00:26:32.443 SGL Bit Bucket Descriptor: Not Supported 00:26:32.443 SGL Metadata Pointer: Not Supported 00:26:32.443 Oversized SGL: Not Supported 00:26:32.443 SGL Metadata Address: Not Supported 00:26:32.443 SGL Offset: Supported 00:26:32.443 Transport SGL Data Block: Not Supported 00:26:32.443 Replay Protected Memory Block: Not Supported 00:26:32.443 00:26:32.443 Firmware Slot Information 00:26:32.443 ========================= 00:26:32.443 Active slot: 0 00:26:32.443 00:26:32.443 00:26:32.443 Error Log 00:26:32.443 
========= 00:26:32.443 00:26:32.443 Active Namespaces 00:26:32.443 ================= 00:26:32.443 Discovery Log Page 00:26:32.443 ================== 00:26:32.443 Generation Counter: 2 00:26:32.443 Number of Records: 2 00:26:32.443 Record Format: 0 00:26:32.443 00:26:32.443 Discovery Log Entry 0 00:26:32.443 ---------------------- 00:26:32.443 Transport Type: 3 (TCP) 00:26:32.443 Address Family: 1 (IPv4) 00:26:32.443 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:32.443 Entry Flags: 00:26:32.443 Duplicate Returned Information: 0 00:26:32.443 Explicit Persistent Connection Support for Discovery: 0 00:26:32.443 Transport Requirements: 00:26:32.443 Secure Channel: Not Specified 00:26:32.443 Port ID: 1 (0x0001) 00:26:32.443 Controller ID: 65535 (0xffff) 00:26:32.443 Admin Max SQ Size: 32 00:26:32.443 Transport Service Identifier: 4420 00:26:32.443 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:32.443 Transport Address: 10.0.0.1 00:26:32.443 Discovery Log Entry 1 00:26:32.443 ---------------------- 00:26:32.443 Transport Type: 3 (TCP) 00:26:32.443 Address Family: 1 (IPv4) 00:26:32.443 Subsystem Type: 2 (NVM Subsystem) 00:26:32.443 Entry Flags: 00:26:32.443 Duplicate Returned Information: 0 00:26:32.443 Explicit Persistent Connection Support for Discovery: 0 00:26:32.443 Transport Requirements: 00:26:32.443 Secure Channel: Not Specified 00:26:32.444 Port ID: 1 (0x0001) 00:26:32.444 Controller ID: 65535 (0xffff) 00:26:32.444 Admin Max SQ Size: 32 00:26:32.444 Transport Service Identifier: 4420 00:26:32.444 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:32.444 Transport Address: 10.0.0.1 00:26:32.444 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:32.444 get_feature(0x01) failed 00:26:32.444 get_feature(0x02) failed 00:26:32.444 get_feature(0x04) failed 00:26:32.444 ===================================================== 00:26:32.444 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:32.444 ===================================================== 00:26:32.444 Controller Capabilities/Features 00:26:32.444 ================================ 00:26:32.444 Vendor ID: 0000 00:26:32.444 Subsystem Vendor ID: 0000 00:26:32.444 Serial Number: 59084b9078520d40a543 00:26:32.444 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:32.444 Firmware Version: 6.8.9-20 00:26:32.444 Recommended Arb Burst: 6 00:26:32.444 IEEE OUI Identifier: 00 00 00 00:26:32.444 Multi-path I/O 00:26:32.444 May have multiple subsystem ports: Yes 00:26:32.444 May have multiple controllers: Yes 00:26:32.444 Associated with SR-IOV VF: No 00:26:32.444 Max Data Transfer Size: Unlimited 00:26:32.444 Max Number of Namespaces: 1024 00:26:32.444 Max Number of I/O Queues: 128 00:26:32.444 NVMe Specification Version (VS): 1.3 00:26:32.444 NVMe Specification Version (Identify): 1.3 00:26:32.444 Maximum Queue Entries: 1024 00:26:32.444 Contiguous Queues Required: No 00:26:32.444 Arbitration Mechanisms Supported 00:26:32.444 Weighted Round Robin: Not Supported 00:26:32.444 Vendor Specific: Not Supported 00:26:32.444 Reset Timeout: 7500 ms 00:26:32.444 Doorbell Stride: 4 bytes 00:26:32.444 NVM Subsystem Reset: Not Supported 00:26:32.444 Command Sets Supported 00:26:32.444 NVM Command Set: Supported 00:26:32.444 Boot Partition: Not Supported 00:26:32.444 
Memory Page Size Minimum: 4096 bytes 00:26:32.444 Memory Page Size Maximum: 4096 bytes 00:26:32.444 Persistent Memory Region: Not Supported 00:26:32.444 Optional Asynchronous Events Supported 00:26:32.444 Namespace Attribute Notices: Supported 00:26:32.444 Firmware Activation Notices: Not Supported 00:26:32.444 ANA Change Notices: Supported 00:26:32.444 PLE Aggregate Log Change Notices: Not Supported 00:26:32.444 LBA Status Info Alert Notices: Not Supported 00:26:32.444 EGE Aggregate Log Change Notices: Not Supported 00:26:32.444 Normal NVM Subsystem Shutdown event: Not Supported 00:26:32.444 Zone Descriptor Change Notices: Not Supported 00:26:32.444 Discovery Log Change Notices: Not Supported 00:26:32.444 Controller Attributes 00:26:32.444 128-bit Host Identifier: Supported 00:26:32.444 Non-Operational Permissive Mode: Not Supported 00:26:32.444 NVM Sets: Not Supported 00:26:32.444 Read Recovery Levels: Not Supported 00:26:32.444 Endurance Groups: Not Supported 00:26:32.444 Predictable Latency Mode: Not Supported 00:26:32.444 Traffic Based Keep ALive: Supported 00:26:32.444 Namespace Granularity: Not Supported 00:26:32.444 SQ Associations: Not Supported 00:26:32.444 UUID List: Not Supported 00:26:32.444 Multi-Domain Subsystem: Not Supported 00:26:32.444 Fixed Capacity Management: Not Supported 00:26:32.444 Variable Capacity Management: Not Supported 00:26:32.444 Delete Endurance Group: Not Supported 00:26:32.444 Delete NVM Set: Not Supported 00:26:32.444 Extended LBA Formats Supported: Not Supported 00:26:32.444 Flexible Data Placement Supported: Not Supported 00:26:32.444 00:26:32.444 Controller Memory Buffer Support 00:26:32.444 ================================ 00:26:32.444 Supported: No 00:26:32.444 00:26:32.444 Persistent Memory Region Support 00:26:32.444 ================================ 00:26:32.444 Supported: No 00:26:32.444 00:26:32.444 Admin Command Set Attributes 00:26:32.444 ============================ 00:26:32.444 Security Send/Receive: Not Supported 00:26:32.444 Format NVM: Not Supported 00:26:32.444 Firmware Activate/Download: Not Supported 00:26:32.444 Namespace Management: Not Supported 00:26:32.444 Device Self-Test: Not Supported 00:26:32.444 Directives: Not Supported 00:26:32.444 NVMe-MI: Not Supported 00:26:32.444 Virtualization Management: Not Supported 00:26:32.444 Doorbell Buffer Config: Not Supported 00:26:32.444 Get LBA Status Capability: Not Supported 00:26:32.444 Command & Feature Lockdown Capability: Not Supported 00:26:32.444 Abort Command Limit: 4 00:26:32.444 Async Event Request Limit: 4 00:26:32.444 Number of Firmware Slots: N/A 00:26:32.444 Firmware Slot 1 Read-Only: N/A 00:26:32.444 Firmware Activation Without Reset: N/A 00:26:32.444 Multiple Update Detection Support: N/A 00:26:32.444 Firmware Update Granularity: No Information Provided 00:26:32.444 Per-Namespace SMART Log: Yes 00:26:32.444 Asymmetric Namespace Access Log Page: Supported 00:26:32.444 ANA Transition Time : 10 sec 00:26:32.444 00:26:32.444 Asymmetric Namespace Access Capabilities 00:26:32.444 ANA Optimized State : Supported 00:26:32.444 ANA Non-Optimized State : Supported 00:26:32.444 ANA Inaccessible State : Supported 00:26:32.444 ANA Persistent Loss State : Supported 00:26:32.444 ANA Change State : Supported 00:26:32.444 ANAGRPID is not changed : No 00:26:32.444 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:32.444 00:26:32.444 ANA Group Identifier Maximum : 128 00:26:32.444 Number of ANA Group Identifiers : 128 00:26:32.444 Max Number of Allowed Namespaces : 1024 00:26:32.444 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:32.444 Command Effects Log Page: Supported 00:26:32.444 Get Log Page Extended Data: Supported 00:26:32.444 Telemetry Log Pages: Not Supported 00:26:32.444 Persistent Event Log Pages: Not Supported 00:26:32.444 Supported Log Pages Log Page: May Support 00:26:32.444 Commands Supported & Effects Log Page: Not Supported 00:26:32.444 Feature Identifiers & Effects Log Page:May Support 00:26:32.444 NVMe-MI Commands & Effects Log Page: May Support 00:26:32.444 Data Area 4 for Telemetry Log: Not Supported 00:26:32.444 Error Log Page Entries Supported: 128 00:26:32.444 Keep Alive: Supported 00:26:32.444 Keep Alive Granularity: 1000 ms 00:26:32.444 00:26:32.444 NVM Command Set Attributes 00:26:32.444 ========================== 00:26:32.444 Submission Queue Entry Size 00:26:32.444 Max: 64 00:26:32.444 Min: 64 00:26:32.444 Completion Queue Entry Size 00:26:32.444 Max: 16 00:26:32.444 Min: 16 00:26:32.444 Number of Namespaces: 1024 00:26:32.444 Compare Command: Not Supported 00:26:32.444 Write Uncorrectable Command: Not Supported 00:26:32.444 Dataset Management Command: Supported 00:26:32.444 Write Zeroes Command: Supported 00:26:32.444 Set Features Save Field: Not Supported 00:26:32.444 Reservations: Not Supported 00:26:32.444 Timestamp: Not Supported 00:26:32.444 Copy: Not Supported 00:26:32.444 Volatile Write Cache: Present 00:26:32.444 Atomic Write Unit (Normal): 1 00:26:32.444 Atomic Write Unit (PFail): 1 00:26:32.444 Atomic Compare & Write Unit: 1 00:26:32.444 Fused Compare & Write: Not Supported 00:26:32.444 Scatter-Gather List 00:26:32.444 SGL Command Set: Supported 00:26:32.444 SGL Keyed: Not Supported 00:26:32.444 SGL Bit Bucket Descriptor: Not Supported 00:26:32.444 SGL Metadata Pointer: Not Supported 00:26:32.444 Oversized SGL: Not Supported 00:26:32.444 SGL Metadata Address: Not Supported 00:26:32.444 SGL Offset: Supported 00:26:32.444 Transport SGL Data Block: Not Supported 00:26:32.444 Replay Protected Memory Block: Not Supported 00:26:32.444 00:26:32.444 Firmware Slot Information 00:26:32.444 ========================= 00:26:32.444 Active slot: 0 00:26:32.444 00:26:32.444 Asymmetric Namespace Access 00:26:32.444 =========================== 00:26:32.444 Change Count : 0 00:26:32.444 Number of ANA Group Descriptors : 1 00:26:32.444 ANA Group Descriptor : 0 00:26:32.444 ANA Group ID : 1 00:26:32.444 Number of NSID Values : 1 00:26:32.444 Change Count : 0 00:26:32.444 ANA State : 1 00:26:32.444 Namespace Identifier : 1 00:26:32.444 00:26:32.444 Commands Supported and Effects 00:26:32.444 ============================== 00:26:32.444 Admin Commands 00:26:32.444 -------------- 00:26:32.444 Get Log Page (02h): Supported 00:26:32.444 Identify (06h): Supported 00:26:32.444 Abort (08h): Supported 00:26:32.444 Set Features (09h): Supported 00:26:32.444 Get Features (0Ah): Supported 00:26:32.444 Asynchronous Event Request (0Ch): Supported 00:26:32.444 Keep Alive (18h): Supported 00:26:32.444 I/O Commands 00:26:32.444 ------------ 00:26:32.444 Flush (00h): Supported 00:26:32.444 Write (01h): Supported LBA-Change 00:26:32.444 Read (02h): Supported 00:26:32.444 Write Zeroes (08h): Supported LBA-Change 00:26:32.444 Dataset Management (09h): Supported 00:26:32.445 00:26:32.445 Error Log 00:26:32.445 ========= 00:26:32.445 Entry: 0 00:26:32.445 Error Count: 0x3 00:26:32.445 Submission Queue Id: 0x0 00:26:32.445 Command Id: 0x5 00:26:32.445 Phase Bit: 0 00:26:32.445 Status Code: 0x2 00:26:32.445 Status Code Type: 0x0 00:26:32.445 Do Not Retry: 1 00:26:32.445 
Error Location: 0x28 00:26:32.445 LBA: 0x0 00:26:32.445 Namespace: 0x0 00:26:32.445 Vendor Log Page: 0x0 00:26:32.445 ----------- 00:26:32.445 Entry: 1 00:26:32.445 Error Count: 0x2 00:26:32.445 Submission Queue Id: 0x0 00:26:32.445 Command Id: 0x5 00:26:32.445 Phase Bit: 0 00:26:32.445 Status Code: 0x2 00:26:32.445 Status Code Type: 0x0 00:26:32.445 Do Not Retry: 1 00:26:32.445 Error Location: 0x28 00:26:32.445 LBA: 0x0 00:26:32.445 Namespace: 0x0 00:26:32.445 Vendor Log Page: 0x0 00:26:32.445 ----------- 00:26:32.445 Entry: 2 00:26:32.445 Error Count: 0x1 00:26:32.445 Submission Queue Id: 0x0 00:26:32.445 Command Id: 0x4 00:26:32.445 Phase Bit: 0 00:26:32.445 Status Code: 0x2 00:26:32.445 Status Code Type: 0x0 00:26:32.445 Do Not Retry: 1 00:26:32.445 Error Location: 0x28 00:26:32.445 LBA: 0x0 00:26:32.445 Namespace: 0x0 00:26:32.445 Vendor Log Page: 0x0 00:26:32.445 00:26:32.445 Number of Queues 00:26:32.445 ================ 00:26:32.445 Number of I/O Submission Queues: 128 00:26:32.445 Number of I/O Completion Queues: 128 00:26:32.445 00:26:32.445 ZNS Specific Controller Data 00:26:32.445 ============================ 00:26:32.445 Zone Append Size Limit: 0 00:26:32.445 00:26:32.445 00:26:32.445 Active Namespaces 00:26:32.445 ================= 00:26:32.445 get_feature(0x05) failed 00:26:32.445 Namespace ID:1 00:26:32.445 Command Set Identifier: NVM (00h) 00:26:32.445 Deallocate: Supported 00:26:32.445 Deallocated/Unwritten Error: Not Supported 00:26:32.445 Deallocated Read Value: Unknown 00:26:32.445 Deallocate in Write Zeroes: Not Supported 00:26:32.445 Deallocated Guard Field: 0xFFFF 00:26:32.445 Flush: Supported 00:26:32.445 Reservation: Not Supported 00:26:32.445 Namespace Sharing Capabilities: Multiple Controllers 00:26:32.445 Size (in LBAs): 3750748848 (1788GiB) 00:26:32.445 Capacity (in LBAs): 3750748848 (1788GiB) 00:26:32.445 Utilization (in LBAs): 3750748848 (1788GiB) 00:26:32.445 UUID: bf2e69a8-0642-4daf-89b6-fff053bad29a 00:26:32.445 Thin Provisioning: Not Supported 00:26:32.445 Per-NS Atomic Units: Yes 00:26:32.445 Atomic Write Unit (Normal): 8 00:26:32.445 Atomic Write Unit (PFail): 8 00:26:32.445 Preferred Write Granularity: 8 00:26:32.445 Atomic Compare & Write Unit: 8 00:26:32.445 Atomic Boundary Size (Normal): 0 00:26:32.445 Atomic Boundary Size (PFail): 0 00:26:32.445 Atomic Boundary Offset: 0 00:26:32.445 NGUID/EUI64 Never Reused: No 00:26:32.445 ANA group ID: 1 00:26:32.445 Namespace Write Protected: No 00:26:32.445 Number of LBA Formats: 1 00:26:32.445 Current LBA Format: LBA Format #00 00:26:32.445 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:32.445 00:26:32.445 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:32.445 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:32.445 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:32.445 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:32.445 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:32.445 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:32.445 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:32.445 rmmod nvme_tcp 00:26:32.445 rmmod nvme_fabrics 00:26:32.445 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:32.445 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:32.445 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:32.445 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:26:32.445 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:32.445 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:32.445 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:32.445 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:32.445 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:26:32.445 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:32.445 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:32.445 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:32.445 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:32.445 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.445 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.445 14:16:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.990 14:16:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:34.990 14:16:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:34.990 14:16:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:34.990 14:16:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:26:34.990 14:16:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:34.990 14:16:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:34.990 14:16:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:34.990 14:16:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:34.990 14:16:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:34.990 14:16:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:34.990 14:16:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:38.288 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:38.288 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:38.288 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:26:38.288 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:38.288 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:38.288 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:38.288 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:38.288 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:38.288 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:38.288 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:38.288 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:38.288 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:38.288 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:38.288 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:38.288 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:38.288 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:38.288 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:26:38.288 00:26:38.288 real 0m18.777s 00:26:38.288 user 0m5.102s 00:26:38.288 sys 0m10.793s 00:26:38.288 14:16:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:38.288 14:16:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:38.288 ************************************ 00:26:38.288 END TEST nvmf_identify_kernel_target 00:26:38.288 ************************************ 00:26:38.288 14:16:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:38.288 14:16:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:38.288 14:16:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:38.288 14:16:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.288 ************************************ 00:26:38.288 START TEST nvmf_auth_host 00:26:38.288 ************************************ 00:26:38.288 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:38.549 * Looking for test storage... 
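The clean_kernel_target teardown just above mirrors that configfs setup in reverse, and the order matters: configfs refuses to rmdir a subsystem that is still linked to a port or still holds an enabled namespace. A sketch of the same order, reusing the paths from the earlier sketch (the 'echo 0' target is again inferred):

    echo 0 > "$subsys/namespaces/1/enable"    # disable the namespace first
    rm -f "$port/subsystems/$nqn"             # unlink from the port before removing it
    rmdir "$subsys/namespaces/1"
    rmdir "$port"
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet               # modprobe -r accepts multiple modules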
00:26:38.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:38.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.550 --rc genhtml_branch_coverage=1 00:26:38.550 --rc genhtml_function_coverage=1 00:26:38.550 --rc genhtml_legend=1 00:26:38.550 --rc geninfo_all_blocks=1 00:26:38.550 --rc geninfo_unexecuted_blocks=1 00:26:38.550 00:26:38.550 ' 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:38.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.550 --rc genhtml_branch_coverage=1 00:26:38.550 --rc genhtml_function_coverage=1 00:26:38.550 --rc genhtml_legend=1 00:26:38.550 --rc geninfo_all_blocks=1 00:26:38.550 --rc geninfo_unexecuted_blocks=1 00:26:38.550 00:26:38.550 ' 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:38.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.550 --rc genhtml_branch_coverage=1 00:26:38.550 --rc genhtml_function_coverage=1 00:26:38.550 --rc genhtml_legend=1 00:26:38.550 --rc geninfo_all_blocks=1 00:26:38.550 --rc geninfo_unexecuted_blocks=1 00:26:38.550 00:26:38.550 ' 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:38.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.550 --rc genhtml_branch_coverage=1 00:26:38.550 --rc genhtml_function_coverage=1 00:26:38.550 --rc genhtml_legend=1 00:26:38.550 --rc geninfo_all_blocks=1 00:26:38.550 --rc geninfo_unexecuted_blocks=1 00:26:38.550 00:26:38.550 ' 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:38.550 14:16:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.550 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:38.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:38.551 14:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.682 14:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:46.682 14:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:46.682 14:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:46.682 14:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:46.682 14:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:46.682 14:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:46.682 14:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:46.682 14:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:46.682 14:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:46.682 14:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:46.682 14:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:46.682 14:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:46.682 14:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:46.682 14:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:46.682 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:46.682 14:16:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:46.682 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:46.682 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:46.682 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:46.682 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:46.682 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:46.682 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:46.682 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:46.682 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:46.683 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:46.683 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.683 
14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:46.683 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:46.683 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:46.683 14:16:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:46.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:46.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:26:46.683 00:26:46.683 --- 10.0.0.2 ping statistics --- 00:26:46.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.683 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:46.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:46.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:26:46.683 00:26:46.683 --- 10.0.0.1 ping statistics --- 00:26:46.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.683 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2880044 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2880044 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2880044 ']' 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
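At this point nvmftestinit has split the two E810 ports between network namespaces: the target interface (cvl_0_0, 10.0.0.2) lives in cvl_0_0_ns_spdk while the initiator interface (cvl_0_1, 10.0.0.1) stays in the root namespace, so NVMe/TCP traffic crosses a real link rather than loopback. A condensed replay of the commands the trace just ran; the iptables rule carries an SPDK_NVMF comment so that teardown can strip it with iptables-save | grep -v SPDK_NVMF | iptables-restore, as seen earlier in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF: allow NVMe/TCP'  # tagged for cleanup
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator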
00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:46.683 14:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.945 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:46.945 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:46.945 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:46.945 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:46.945 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9cedaa34d0ea0aea319f842d92b4659a 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ceX 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9cedaa34d0ea0aea319f842d92b4659a 0 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9cedaa34d0ea0aea319f842d92b4659a 0 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9cedaa34d0ea0aea319f842d92b4659a 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ceX 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ceX 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.ceX 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:47.207 14:16:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=53e4f7ab2ea36a674f42fafbe707369223087b973ab59771113445824dd6a8ef 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.pJq 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 53e4f7ab2ea36a674f42fafbe707369223087b973ab59771113445824dd6a8ef 3 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 53e4f7ab2ea36a674f42fafbe707369223087b973ab59771113445824dd6a8ef 3 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=53e4f7ab2ea36a674f42fafbe707369223087b973ab59771113445824dd6a8ef 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.pJq 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.pJq 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.pJq 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=79915f2309ec5e7f3ec63d34f51a30e12c086e4ac6294d19 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.u0h 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 79915f2309ec5e7f3ec63d34f51a30e12c086e4ac6294d19 0 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 79915f2309ec5e7f3ec63d34f51a30e12c086e4ac6294d19 0 
00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=79915f2309ec5e7f3ec63d34f51a30e12c086e4ac6294d19 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.u0h 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.u0h 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.u0h 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0f14cfca4590abe35bc43e57ec6228792f256b370d47e875 00:26:47.207 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:47.469 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.aYC 00:26:47.469 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0f14cfca4590abe35bc43e57ec6228792f256b370d47e875 2 00:26:47.469 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0f14cfca4590abe35bc43e57ec6228792f256b370d47e875 2 00:26:47.469 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:47.469 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:47.469 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0f14cfca4590abe35bc43e57ec6228792f256b370d47e875 00:26:47.469 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:47.469 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:47.469 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.aYC 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.aYC 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.aYC 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:47.470 14:16:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=669632d9662be7716bdc3cb34600cde6 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.5PP 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 669632d9662be7716bdc3cb34600cde6 1 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 669632d9662be7716bdc3cb34600cde6 1 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=669632d9662be7716bdc3cb34600cde6 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.5PP 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.5PP 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.5PP 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=79aed58dd859f0882060ef40dbf59103 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ufL 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 79aed58dd859f0882060ef40dbf59103 1 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 79aed58dd859f0882060ef40dbf59103 1 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=79aed58dd859f0882060ef40dbf59103 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ufL 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ufL 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ufL 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8cc60fc673a98b819213731a765b959b72ae6f858d9669fc 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.TES 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8cc60fc673a98b819213731a765b959b72ae6f858d9669fc 2 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8cc60fc673a98b819213731a765b959b72ae6f858d9669fc 2 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8cc60fc673a98b819213731a765b959b72ae6f858d9669fc 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.TES 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.TES 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.TES 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:47.470 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:47.731 14:16:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b9a84501c486f389001e52ec5d20fd54 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.DKF 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b9a84501c486f389001e52ec5d20fd54 0 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b9a84501c486f389001e52ec5d20fd54 0 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b9a84501c486f389001e52ec5d20fd54 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.DKF 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.DKF 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.DKF 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=111db5331e44364e2b4584d6c6f6c9bf1df5ba33dc8685d38ed5b11b5be6bcfb 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ugO 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 111db5331e44364e2b4584d6c6f6c9bf1df5ba33dc8685d38ed5b11b5be6bcfb 3 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 111db5331e44364e2b4584d6c6f6c9bf1df5ba33dc8685d38ed5b11b5be6bcfb 3 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=111db5331e44364e2b4584d6c6f6c9bf1df5ba33dc8685d38ed5b11b5be6bcfb 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ugO 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ugO 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ugO 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2880044 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2880044 ']' 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.731 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:47.732 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:47.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:47.732 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:47.732 14:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.993 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:47.993 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:47.993 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ceX 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.pJq ]] 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.pJq 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.u0h 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.aYC ]] 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
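The gen_dhchap_key calls traced above all follow one recipe: draw random bytes with xxd, hex-encode them, and wrap the result in the DH-HMAC-CHAP secret representation DHHC-1:<hash-id>:<base64>:, where the hash id is 00 (null), 01 (sha256), 02 (sha384), or 03 (sha512). A sketch for the "gen_dhchap_key sha384 48" case; the CRC-32 framing inside the python step is an assumption modeled on the DHHC-1 secret format, not a verbatim copy of nvmf/common.sh:

# 48 hex characters of secret material, exactly as the trace draws them.
key=$(xxd -p -c0 -l 24 /dev/urandom)
file=$(mktemp -t spdk.key-sha384.XXX)
python3 - "$key" << 'EOF' > "$file"
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed: CRC-32 of the secret, little-endian
print("DHHC-1:{:02x}:{}:".format(2, base64.b64encode(key + crc).decode()))
EOF
chmod 0600 "$file"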
/tmp/spdk.key-sha384.aYC 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.5PP 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ufL ]] 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ufL 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.TES 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.DKF ]] 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.DKF 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ugO 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.994 14:16:54 
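Each generated secret file is then registered with the running target by name (key0/ckey0 through key4) over JSON-RPC. A compact equivalent of the keyring_file_add_key loop traced above:

# Register host keys and, where present, their paired controller keys.
for i in "${!keys[@]}"; do
    ./scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
    [[ -n ${ckeys[i]:-} ]] && ./scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
done

Note that ckeys[4] is deliberately left empty, so key4 is registered without a controller key.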
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:47.994 14:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:51.293 Waiting for block devices as requested 00:26:51.554 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:51.554 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:51.554 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:51.814 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:51.814 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:51.814 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:51.814 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:52.074 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:52.074 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:52.334 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:52.334 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:52.334 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:52.334 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:52.594 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:52.594 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:52.595 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:52.854 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:53.425 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:53.425 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:53.425 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:53.425 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:53.425 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:53.425 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:53.425 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:53.425 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:53.425 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:53.425 No valid GPT data, bailing 00:26:53.425 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:53.425 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:53.425 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:53.425 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:53.425 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:53.425 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:53.425 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:53.425 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:53.425 14:16:59 
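The mkdir calls above start building a kernel nvmet target under configfs; the echoes that follow in the trace fill in its attributes, and the ln -s exposes the subsystem on the TCP port. A sketch of the configure_kernel_target flow, with the standard nvmet attribute file names filled in as an assumption (the trace shows only the values being written, not their destinations):

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"    # tightened back to 0 once per-host auth is set up
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

The nvme discover run that follows in the trace confirms both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 are reachable at 10.0.0.1:4420.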
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:53.425 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:26:53.425 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:53.425 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:26:53.425 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:26:53.426 00:26:53.426 Discovery Log Number of Records 2, Generation counter 2 00:26:53.426 =====Discovery Log Entry 0====== 00:26:53.426 trtype: tcp 00:26:53.426 adrfam: ipv4 00:26:53.426 subtype: current discovery subsystem 00:26:53.426 treq: not specified, sq flow control disable supported 00:26:53.426 portid: 1 00:26:53.426 trsvcid: 4420 00:26:53.426 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:53.426 traddr: 10.0.0.1 00:26:53.426 eflags: none 00:26:53.426 sectype: none 00:26:53.426 =====Discovery Log Entry 1====== 00:26:53.426 trtype: tcp 00:26:53.426 adrfam: ipv4 00:26:53.426 subtype: nvme subsystem 00:26:53.426 treq: not specified, sq flow control disable supported 00:26:53.426 portid: 1 00:26:53.426 trsvcid: 4420 00:26:53.426 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:53.426 traddr: 10.0.0.1 00:26:53.426 eflags: none 00:26:53.426 sectype: none 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: ]] 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.426 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.686 nvme0n1 00:26:53.686 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.686 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.686 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.686 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: ]] 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
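nvmet_auth_set_key programs the kernel target's expectations for one host (digest, DH group, host key and, for bidirectional auth, controller key), and connect_authenticate then drives the SPDK host side through bdev_nvme. A sketch of one sha256/ffdhe2048 round; the configfs attribute names under the host node (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are assumptions based on the kernel nvmet-auth interface, while the rpc.py flags are exactly those traced:

# Target side: what the authenticating host must present.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048 > "$host/dhchap_dhgroup"
echo "DHHC-1:00:..." > "$host/dhchap_key"        # host secret (elided here)
echo "DHHC-1:02:..." > "$host/dhchap_ctrl_key"   # controller secret, bidirectional auth only

# Host side: restrict the allowed digests/groups, then attach using keyring names.
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0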
00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.687 14:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.947 nvme0n1 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.947 14:17:00 
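Every connect_authenticate iteration ends with the check seen here: a successful authenticated attach must surface exactly one controller named nvme0, which is then detached before the next combination. Schematically:

# Verify the attach took effect, then tear down for the next iteration.
name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]
./scripts/rpc.py bdev_nvme_detach_controller nvme0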
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: ]] 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.947 nvme0n1 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.947 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.208 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.208 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.208 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.208 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.208 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.208 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.208 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:54.208 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.208 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:54.208 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:54.208 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:54.208 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:26:54.208 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:26:54.208 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:54.208 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: ]] 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.209 nvme0n1 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: ]] 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.209 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.470 nvme0n1 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.470 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.471 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.471 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:54.471 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.471 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.732 nvme0n1 00:26:54.732 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.732 14:17:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.732 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.732 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.732 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.732 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.732 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.732 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.732 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.732 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.732 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.732 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:54.732 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.732 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:54.732 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.732 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:54.732 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:54.732 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:54.732 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:26:54.732 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:26:54.732 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:54.732 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:54.732 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:26:54.732 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: ]] 00:26:54.732 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:26:54.733 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:54.733 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.733 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:54.733 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:54.733 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:54.733 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.733 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:54.733 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.733 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.733 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.733 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.733 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.733 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.733 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.733 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.733 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.733 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.733 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.733 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.733 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.733 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.733 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:54.733 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.733 14:17:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.993 nvme0n1 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: ]] 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.993 
14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.993 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.255 nvme0n1 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: ]] 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.255 14:17:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.255 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.516 nvme0n1 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: ]] 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.516 14:17:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.516 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.777 nvme0n1 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:55.777 14:17:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.777 14:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.038 nvme0n1 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: ]] 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:56.038 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.039 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:56.039 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:56.039 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:56.039 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.039 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:56.039 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.039 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.039 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.039 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.039 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.039 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:26:56.039 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.039 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.039 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.039 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.039 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.039 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.039 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.039 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.039 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:56.039 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.039 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.300 nvme0n1 00:26:56.300 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.300 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.300 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.300 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.300 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.300 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.300 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.300 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.300 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.300 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.300 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.300 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.300 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:56.300 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.300 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:56.300 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:56.300 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:56.300 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:26:56.300 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:26:56.300 14:17:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: ]] 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.301 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.561 nvme0n1 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: ]] 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.561 14:17:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.821 nvme0n1 00:26:56.821 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.821 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.821 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.821 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.821 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.821 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.821 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.821 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.821 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.821 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: ]] 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.082 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.342 nvme0n1 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.342 14:17:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.342 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.603 nvme0n1 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: ]] 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.603 14:17:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.175 nvme0n1 00:26:58.175 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.175 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.175 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.175 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: ]] 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 
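The trace above is one pass of the test's inner loop: for every digest/dhgroup/keyid combination, host/auth.sh first loads the matching 'hmac(...)' digest, FFDHE group, and DHHC-1 secret into the kernel nvmet target (nvmet_auth_set_key, auth.sh@42-51), then calls connect_authenticate (auth.sh@55-65) to prove the SPDK initiator can complete the DH-HMAC-CHAP handshake. What follows is a minimal sketch of that initiator-side cycle, not the verbatim script: it assumes rpc_cmd is the usual SPDK test wrapper around scripts/rpc.py, and it takes the NQNs, the resolved 10.0.0.1 initiator address, and the key0..key4 / ckey0..ckey3 key names directly from the log.

# Sketch of the connect/verify/detach cycle the xtrace repeats above.
# Assumes the harness already registered keys key0..key4 (and controller
# keys ckey0..ckey3) with the bdev/nvme layer, and that $rootdir points at
# the SPDK repo root as set by the test environment.
rpc_cmd() { "$rootdir/scripts/rpc.py" "$@"; }

connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3
	# Pass a controller key only when one exists for this keyid, enabling
	# bidirectional authentication (keyid=4 has no ckey in this run).
	local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

	# Restrict the initiator to exactly one digest and one DH group so the
	# handshake under test is the only one that can be negotiated.
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

	# Attach over TCP with the keyid-th DH-HMAC-CHAP key.
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"

	# The nvme0 controller only exists if authentication succeeded.
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

	rpc_cmd bdev_nvme_detach_controller nvme0
}

# Driving loops as they appear at auth.sh@100-104; nvmet_auth_set_key is the
# target-side counterpart visible in the trace and is not reproduced here.
for digest in "${digests[@]}"; do
	for dhgroup in "${dhgroups[@]}"; do
		for keyid in "${!keys[@]}"; do
			nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # kernel nvmet target
			connect_authenticate "$digest" "$dhgroup" "$keyid" # SPDK initiator
		done
	done
done

Because bdev_nvme_attach_controller only creates the nvme0 controller after the handshake completes, the bdev_nvme_get_controllers name check doubles as the pass/fail assertion, and bdev_nvme_detach_controller resets state before the next combination, which is why the same get/detach pattern recurs after every "nvme0n1" line in the trace.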
00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.176 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.437 nvme0n1 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.437 14:17:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: ]] 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.437 14:17:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.007 nvme0n1 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: ]] 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.007 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.576 nvme0n1 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.576 14:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.836 nvme0n1 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: ]] 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.836 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:00.791 nvme0n1 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: ]] 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.791 14:17:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.364 nvme0n1 00:27:01.364 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.364 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.364 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:01.365 
14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: ]] 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.365 14:17:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.938 nvme0n1 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: ]] 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.938 
14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.938 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.879 nvme0n1 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.879 14:17:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.451 nvme0n1 00:27:03.451 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.451 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.451 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.451 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.451 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.451 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.451 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.451 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.451 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.451 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.451 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.451 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:03.451 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:03.451 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.451 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:03.451 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.451 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:03.451 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: ]] 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.452 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.712 nvme0n1 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: ]] 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.712 nvme0n1 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.712 14:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:03.972 14:17:10 
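Every rpc_cmd in this trace is bracketed by xtrace_disable (common/autotest_common.sh@563), a set +x, and a [[ 0 == 0 ]] line at @591, which is xtrace's rendering of the saved return-status check; the literal 0 == 0 means each RPC so far has returned success. Roughly, and only as suggested by the traced helper names (the real wrapper body is not shown here):

    # Rough shape of the tracing bracket around an RPC, inferred from
    # common/autotest_common.sh@563/@591 in the trace.
    xtrace_disable          # silence set -x noise while the RPC runs
    rpc_cmd bdev_nvme_get_controllers
    rc=$?
    xtrace_restore
    [[ $rc == 0 ]]          # rendered as '[[ 0 == 0 ]]' on success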
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: ]] 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.972 nvme0n1 00:27:03.972 14:17:10 
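On the host side, connect_authenticate (host/auth.sh@55-61 in the trace) pins the initiator to a single digest/dhgroup pair and then attaches with the matching key material. Reconstructed from the traced RPC calls; rpc_cmd is the suite's wrapper around SPDK's rpc.py:

    # connect_authenticate as reconstructed from the traced rpc_cmd calls.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Allow exactly one digest and one DH group for this negotiation.
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Attach, supplying the host key and, when present, the ctrlr key.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" \
            ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    }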
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: ]] 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.972 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.233 nvme0n1 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:04.233 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.234 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.494 nvme0n1 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: ]] 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.494 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.754 nvme0n1 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.754 
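The DHHC-1:xx:<base64>: strings in this log are the NVMe-oF textual form of DH-HMAC-CHAP secrets: the middle field names the hash used to transform the secret (00 = no transform, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), and the base64 payload carries the secret followed by a 4-byte CRC-32. A quick length check against one of the keys traced above:

    # Decode one of the traced keys and confirm the payload length.
    key='DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=:'
    b64=${key#DHHC-1:*:}    # strip the 'DHHC-1:03:' prefix
    b64=${b64%:}            # strip the trailing ':'
    echo -n "$b64" | base64 -d | wc -c
    # => 68 bytes: a 64-byte secret (the :03:/SHA-512 class) + 4-byte CRC-32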
14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: ]] 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.754 14:17:10 
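The ckey assignment at host/auth.sh@58 uses bash's ${var:+word} expansion inside an array, so the controller-key arguments exist only for key IDs that have a paired ckey (keyid 4 has none, hence the empty [[ -z '' ]] checks elsewhere in the trace). The same idiom in isolation, with placeholder values:

    # ${var:+word} expands to word only when var is set and non-empty, so
    # the array holds either two arguments or nothing -- never a bare flag.
    ckeys=([0]=secretA [4]=)               # keyid 4 intentionally empty
    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid argc=${#ckey[@]}"   # prints argc=2, then argc=0
    done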
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.754 14:17:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.014 nvme0n1 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: ]] 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:05.014 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:05.015 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.015 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:05.015 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.015 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.015 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.015 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.015 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:05.015 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.015 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.015 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.015 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.015 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.015 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.015 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.015 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.015 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.015 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:05.015 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.015 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.275 nvme0n1 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: ]] 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.275 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.535 nvme0n1 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:05.535 
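Each iteration is verified at host/auth.sh@64-65 by listing the attached controllers and comparing the name before detaching. The backslashes in [[ nvme0 == \n\v\m\e\0 ]] are just xtrace's rendering of a quoted right-hand side: escaping every character keeps [[ == ]] doing a literal comparison rather than glob matching. An equivalent check:

    # Confirm the authenticated attach actually produced controller nvme0.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]   # quoted RHS == literal match, as in the trace
    rpc_cmd bdev_nvme_detach_controller nvme0   # reset for the next keyid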
14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.535 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.795 nvme0n1 00:27:05.795 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.795 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.795 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.795 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.795 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.795 
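get_main_ns_ip (nvmf/common.sh@769-783) picks the address to dial by transport: it maps each transport to the name of an environment variable and then dereferences it, which is why the trace shows ip=NVMF_INITIATOR_IP before 10.0.0.1 is echoed. A sketch matching the traced control flow; variable names are taken from the trace, and any fallback branches between @778 and @783 are not visible in this excerpt:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to read
        [[ -z ${!ip} ]] && return 1            # indirect: $NVMF_INITIATOR_IP
        echo "${!ip}"                          # 10.0.0.1 in this run
    }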
14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.795 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.795 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.795 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.795 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.795 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.795 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:05.795 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.795 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:05.795 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.795 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:05.795 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:05.795 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:05.795 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:27:05.795 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:27:05.795 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:05.795 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:05.795 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:27:05.795 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: ]] 00:27:05.795 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:27:05.796 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:05.796 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.796 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:05.796 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:05.796 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:05.796 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.796 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:05.796 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.796 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.796 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:27:05.796 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.796 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:05.796 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.796 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.796 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.796 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.796 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.796 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.796 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.796 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.796 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.796 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:05.796 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.796 14:17:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.060 nvme0n1 00:27:06.060 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.060 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.060 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.060 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.060 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.060 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.060 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.060 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.060 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.060 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.060 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.060 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.060 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: ]] 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:06.061 14:17:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.061 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.359 nvme0n1 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: ]] 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
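The ckey=(...) assignment traced at host/auth.sh@58 uses bash's ${var:+word} alternate-value expansion: the array receives the --dhchap-ctrlr-key argument pair only when a controller key exists for that keyid, so the flag disappears entirely for keyid=4 (whose ckey is empty in the trace). A minimal illustration with hypothetical secret values:

ckeys=([0]=secret0 [1]=secret1)   # hypothetical; index 4 deliberately unset
keyid=1
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"   # 2: the flag and its value get appended to the rpc call
keyid=4
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"   # 0: no controller key, so no flag is passed at all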
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.359 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.678 nvme0n1 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: ]] 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.678 14:17:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.959 nvme0n1 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:06.959 14:17:13 
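get_main_ns_ip (nvmf/common.sh@769-783, traced repeatedly above) resolves the connect address indirectly: it maps the transport to the *name* of an environment variable, then dereferences that name with ${!ip}. A hedged reconstruction from the trace; the transport variable's name (TEST_TRANSPORT below) is an assumption, since the xtrace only shows its expanded value "tcp", and the fallback branch at @779-782 is not exercised here:

get_main_ns_ip() {
  local ip
  local -A ip_candidates
  ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
  ip_candidates["tcp"]=NVMF_INITIATOR_IP
  # assumed variable name; the trace shows only the expanded value "tcp"
  [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
  ip=${ip_candidates[$TEST_TRANSPORT]}
  [[ -z ${!ip} ]] && return 1   # ${!ip}: indirect expansion, 10.0.0.1 in this run
  echo "${!ip}"
}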
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.959 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.219 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.219 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.219 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.219 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.219 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.219 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.219 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.219 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.219 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.219 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.219 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.219 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.219 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:07.219 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.219 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.219 nvme0n1 00:27:07.219 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.219 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.219 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.219 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.219 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.480 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.480 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.480 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.480 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.480 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.480 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.480 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:07.480 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.480 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:07.480 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.480 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:07.480 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:07.480 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:07.480 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: ]] 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.481 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.741 nvme0n1 00:27:07.741 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.741 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.741 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.741 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.741 14:17:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.741 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.741 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.741 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.741 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.741 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
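The DHHC-1:00:...: strings echoed throughout are NVMe DH-HMAC-CHAP secret representations, "DHHC-1:<t>:<base64 blob>:", where <t> selects the optional secret transform (00 = none, 01/02/03 = SHA-256/384/512) and the blob is the raw secret with a 4-byte CRC appended. A quick length check on the keyid-0 secret from the log (the key string is copied from the trace; the CRC detail comes from the NVMe-oF secret format, not from anything verified in this run):

key='DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6:'
cut -d: -f3 <<< "$key" | base64 -d | wc -c   # 36 bytes = 32-byte secret + 4-byte CRC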
DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: ]] 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.002 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.262 nvme0n1 00:27:08.262 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.262 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.262 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.262 14:17:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.262 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.262 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.262 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.262 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.262 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: ]] 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.263 14:17:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.263 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.833 nvme0n1 00:27:08.833 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.833 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.833 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.833 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.833 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.833 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.833 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.833 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.833 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.833 14:17:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
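A note on the recurring check at host/auth.sh@64: inside [[ ]], the right-hand side of == is a glob pattern, which is why the expected controller name shows up in the trace as \n\v\m\e\0 — each character is backslash-escaped so bash compares literally instead of pattern-matching:

name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == \n\v\m\e\0 ]]   # literal comparison against "nvme0"
[[ $name == nvme* ]]        # an unescaped RHS would glob-match nvme1, nvme0n1, ...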
key=DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: ]] 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:08.833 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.833 
14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.401 nvme0n1 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:09.401 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.402 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.402 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:09.402 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.402 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:09.402 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:09.402 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:09.402 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:09.402 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.402 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.661 nvme0n1 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.661 14:17:15 
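nvmet_auth_set_key's echo lines (host/auth.sh@48-51) install the digest, DH group, and key pair on the target before each attach, but the trace does not show where those echoes are redirected. A hedged sketch assuming the Linux kernel nvmet target's per-host configfs attributes; the directory path and attribute names are assumptions, not taken from the log:

host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
echo 'hmac(sha384)' > "$host_dir/dhchap_hash"      # host/auth.sh@48
echo ffdhe8192 > "$host_dir/dhchap_dhgroup"        # host/auth.sh@49
echo "$key" > "$host_dir/dhchap_key"               # host/auth.sh@50
[[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # host/auth.sh@51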
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: ]] 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.661 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.933 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.933 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.933 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:09.933 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:09.933 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:09.933 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.933 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.933 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:09.933 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.933 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:09.933 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:09.933 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:09.933 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:09.933 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.933 14:17:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.501 nvme0n1 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: ]] 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:10.501 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:10.502 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.502 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:10.502 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.502 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.502 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.502 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.502 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:10.502 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:10.502 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:10.502 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.502 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.502 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:10.502 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.502 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:10.502 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:10.502 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:10.502 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:10.502 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.502 14:17:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.069 nvme0n1 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: ]] 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.069 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.328 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.328 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.328 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:11.328 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:11.328 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:11.328 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.328 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.328 
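
The nvmet_auth_set_key frames traced above (host/auth.sh@42-51) load each DH-HMAC-CHAP key pair into the kernel nvmet target before the host-side connect is attempted. The bare echoes at auth.sh@48-51 are consistent with writes into the target's configfs host entry; the sketch below reconstructs the helper under that assumption. The configfs path and the attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) come from the standard Linux nvmet layout, not from this log, and the keys/ckeys arrays are assumed to hold the DHHC-1 secrets seen in the trace.

  # Hedged reconstruction of nvmet_auth_set_key (host/auth.sh@42-51).
  nvmet_auth_set_key() {
      local digest dhgroup keyid key ckey
      digest=$1 dhgroup=$2 keyid=$3
      key=${keys[keyid]} ckey=${ckeys[keyid]}
      # Assumed configfs entry for the allowed host NQN used in this run.
      local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
      echo "hmac($digest)" > "$hostdir/dhchap_hash"      # auth.sh@48
      echo "$dhgroup"      > "$hostdir/dhchap_dhgroup"   # auth.sh@49
      echo "$key"          > "$hostdir/dhchap_key"       # auth.sh@50
      [[ -z $ckey ]] || echo "$ckey" > "$hostdir/dhchap_ctrl_key"   # auth.sh@51
  }
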
14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:11.328 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.328 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:11.328 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:11.328 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:11.329 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:11.329 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.329 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.898 nvme0n1 00:27:11.898 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.898 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.898 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.898 14:17:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: ]] 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.898 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.466 nvme0n1 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.466 14:17:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.466 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.725 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.725 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.725 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:12.725 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:12.725 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:12.725 14:17:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.725 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.725 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:12.725 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.725 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:12.725 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:12.725 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:12.725 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:12.725 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.725 14:17:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.294 nvme0n1 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: ]] 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.294 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:13.554 nvme0n1 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: ]] 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:13.554 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.555 nvme0n1 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.555 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.815 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.815 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.815 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.815 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.815 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.815 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.815 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:13.815 
14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.815 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:13.815 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:13.815 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:13.815 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:13.815 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:13.815 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:13.815 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:13.815 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:13.815 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: ]] 00:27:13.815 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:13.815 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:13.815 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.815 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:13.815 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:13.815 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:13.816 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.816 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:13.816 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.816 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.816 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.816 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.816 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:13.816 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:13.816 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:13.816 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.816 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.816 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:13.816 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.816 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:13.816 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:13.816 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:13.816 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:13.816 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.816 14:17:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.816 nvme0n1 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: ]] 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.816 
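
Each connect_authenticate round traced above (host/auth.sh@55-65) exercises the initiator side of the same key: it restricts SPDK's bdev_nvme module to a single digest/dhgroup pair, attaches with the matching --dhchap-key/--dhchap-ctrlr-key key names, checks that the controller actually came up, and detaches again. A condensed sketch built from the RPCs visible in the trace follows; the rpc_cmd helper is assumed to forward to SPDK's rpc.py, and get_main_ns_ip is the address lookup traced at nvmf/common.sh@769-783.

  # Hedged sketch of connect_authenticate (host/auth.sh@55-65).
  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      # Drop the controller key when no ckey is defined (keyid 4 above), as at auth.sh@58.
      local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"
      # Verify and tear down, as at auth.sh@64-65.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }
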
14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.816 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.076 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.076 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.076 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:14.076 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:14.076 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:14.076 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.076 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.076 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:14.076 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.076 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:14.076 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:14.076 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.077 nvme0n1 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.077 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.338 nvme0n1 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: ]] 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.338 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.601 nvme0n1 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.601 
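
The secrets echoed above are DH-HMAC-CHAP secret strings in the DHHC-1:<t>:<base64>: representation. As background (taken from the NVMe in-band authentication spec and nvme-cli conventions, not from this log): the two-digit <t> field selects the secret transform, 00 for an untransformed secret and 01/02/03 for SHA-256/SHA-384/SHA-512-transformed secrets of 32/48/64 bytes, and the base64 payload carries the key material plus a CRC. Strings like these are typically generated with nvme-cli, for example:

  # Hypothetical invocation; gen-dhchap-key and its -m (transform hash) and
  # -n (host NQN) flags are assumptions about the toolchain, not commands
  # taken from this run.
  nvme gen-dhchap-key -m 3 -n nqn.2024-02.io.spdk:host0   # emits DHHC-1:03:<base64>:
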
14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: ]] 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:14.601 14:17:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.601 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.863 nvme0n1 00:27:14.863 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.863 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.863 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.863 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.863 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.863 14:17:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.863 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.863 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.863 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.863 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.863 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.863 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.863 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:14.863 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.863 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:14.863 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:14.863 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:14.863 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:14.863 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:14.863 14:17:21 
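
The get_main_ns_ip block traced repeatedly above (nvmf/common.sh@769-783) resolves the address each attach should target. It maps the transport to the name of an environment variable and then dereferences it, which is why the trace first tests the literal string NVMF_INITIATOR_IP and only afterwards the value 10.0.0.1. A hedged reconstruction, assuming the test environment defines TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1 as this run does:

  # Sketch of get_main_ns_ip (nvmf/common.sh@769-783).
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # common.sh@772
      ip_candidates["tcp"]=NVMF_INITIATOR_IP       # common.sh@773
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @775
      ip=${ip_candidates[$TEST_TRANSPORT]}   # holds a variable name, e.g. NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1            # @778: indirect expansion yields 10.0.0.1
      echo "${!ip}"                          # @783
  }
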
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:14.863 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:14.863 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:14.863 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: ]] 00:27:14.863 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:14.864 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:14.864 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.864 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:14.864 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:14.864 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:14.864 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.864 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:14.864 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.864 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.864 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.864 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.864 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:14.864 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:14.864 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:14.864 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.864 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.864 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:14.864 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.864 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:14.864 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:14.864 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:14.864 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:14.864 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.864 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.127 nvme0n1 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: ]] 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.127 14:17:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.127 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.389 nvme0n1 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:15.389 
14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.389 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
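Each cycle in this trace provisions the kernel nvmet soft target with one of the pre-generated DHHC-1 secrets, then re-attaches from the SPDK initiator with the matching key pair. The xtrace above only shows the helper's echo commands, not their redirection targets, so the following sketch of the traced nvmet_auth_set_key step is an assumption based on the standard Linux nvmet configfs auth attributes (kernel 5.19+), not the suite's verbatim helper; the keys/ckeys arrays, digest, dhgroup, and host NQN are taken from the echoed values in the trace.

    # Sketch of the target-side step, assuming the nvmet configfs auth
    # attributes (dhchap_hash/dhchap_dhgroup/dhchap_key/dhchap_ctrl_key);
    # redirections are invisible to xtrace, so the paths are illustrative.
    nvmet_auth_set_key() {   # e.g. nvmet_auth_set_key sha512 ffdhe3072 2
        local digest=$1 dhgroup=$2 keyid=$3
        # Host entry for the initiator NQN used by the attach calls below
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "${host}/dhchap_hash"    # HMAC to negotiate
        echo "${dhgroup}" > "${host}/dhchap_dhgroup"      # FFDHE group
        echo "${keys[keyid]}" > "${host}/dhchap_key"      # host secret, DHHC-1 format
        if [[ -n ${ckeys[keyid]:-} ]]; then               # ckey is empty for keyid=4,
            echo "${ckeys[keyid]}" > "${host}/dhchap_ctrl_key"  # so that run stays unidirectional
        fi
    }

    # Initiator half of the same cycle, as traced: pin the digest and dhgroup,
    # then reconnect authenticated. key2/ckey2 are key names the suite
    # registered with the SPDK target application earlier in the run.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

A successful handshake leaves a controller named nvme0 visible to bdev_nvme_get_controllers, which the loop asserts before detaching and advancing to the next keyid, as the trace continues below.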
00:27:15.650 nvme0n1 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: ]] 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:15.650 14:17:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:15.650 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.651 14:17:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.911 nvme0n1 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.911 14:17:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: ]] 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.911 14:17:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:15.911 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:15.912 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:15.912 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.912 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.172 nvme0n1 00:27:16.172 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.172 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.172 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.172 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.172 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.173 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.173 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.173 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.173 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.173 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.173 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.173 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.173 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:16.173 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.173 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:16.173 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:16.173 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: ]] 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.434 nvme0n1 00:27:16.434 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: ]] 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.695 14:17:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.956 nvme0n1 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.956 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.216 nvme0n1 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: ]] 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.216 14:17:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.216 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.788 nvme0n1 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: ]] 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:17.788 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.789 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:17.789 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.789 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.789 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.789 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.789 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:17.789 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.789 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.789 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.789 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.789 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.789 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.789 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.789 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:17.789 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.789 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:17.789 14:17:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.789 14:17:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.049 nvme0n1 00:27:18.049 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: ]] 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.310 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.572 nvme0n1 00:27:18.572 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.572 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.572 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.572 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.572 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.572 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: ]] 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.833 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.834 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.834 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.834 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.834 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.834 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:18.834 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.834 14:17:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.095 nvme0n1 00:27:19.095 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.095 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.095 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.095 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.095 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.095 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.095 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.095 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.095 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.095 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.095 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.095 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.095 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:19.095 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.095 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:19.095 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:19.095 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:19.096 14:17:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.096 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.667 nvme0n1 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNlZGFhMzRkMGVhMGFlYTMxOWY4NDJkOTJiNDY1OWGyB5B6: 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: ]] 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNlNGY3YWIyZWEzNmE2NzRmNDJmYWZiZTcwNzM2OTIyMzA4N2I5NzNhYjU5NzcxMTEzNDQ1ODI0ZGQ2YThlZkLpHok=: 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.667 14:17:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.237 nvme0n1 00:27:20.237 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.237 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.237 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.237 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.237 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.237 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.237 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.237 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.237 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.237 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.237 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.237 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.238 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:20.238 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: ]] 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.499 14:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.072 nvme0n1 00:27:21.072 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.072 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.072 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.072 14:17:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.072 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.072 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.072 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.072 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: ]] 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.073 14:17:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.073 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.644 nvme0n1 00:27:21.644 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.644 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.644 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.644 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.644 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.644 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.644 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.644 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.644 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.644 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjNjBmYzY3M2E5OGI4MTkyMTM3MzFhNzY1Yjk1OWI3MmFlNmY4NThkOTY2OWZjt9siNA==: 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: ]] 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjlhODQ1MDFjNDg2ZjM4OTAwMWU1MmVjNWQyMGZkNTTalwhl: 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:21.906 14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.906 
14:17:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.479 nvme0n1 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTExZGI1MzMxZTQ0MzY0ZTJiNDU4NGQ2YzZmNmM5YmYxZGY1YmEzM2RjODY4NWQzOGVkNWIxMWI1YmU2YmNmYvpZ1fs=: 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.479 14:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.050 nvme0n1 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: ]] 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.050 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.311 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.311 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:23.311 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.311 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.311 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.311 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.311 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.311 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.311 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.311 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.311 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.311 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.311 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:23.311 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:23.311 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:23.311 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:23.311 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:23.311 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd
00:27:23.311 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:23.311 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:27:23.311 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:23.311 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:23.311 request:
00:27:23.311 {
00:27:23.311 "name": "nvme0",
00:27:23.311 "trtype": "tcp",
00:27:23.311 "traddr": "10.0.0.1",
00:27:23.311 "adrfam": "ipv4",
00:27:23.311 "trsvcid": "4420",
00:27:23.311 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:27:23.311 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:27:23.311 "prchk_reftag": false,
00:27:23.311 "prchk_guard": false,
00:27:23.311 "hdgst": false,
00:27:23.311 "ddgst": false,
00:27:23.311 "allow_unrecognized_csi": false,
00:27:23.311 "method": "bdev_nvme_attach_controller",
00:27:23.311 "req_id": 1
00:27:23.311 }
00:27:23.311 Got JSON-RPC error response
00:27:23.311 response:
00:27:23.312 {
00:27:23.312 "code": -5,
00:27:23.312 "message": "Input/output error"
00:27:23.312 }
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
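The exchange just above is the first of the negative attach tests: with DHCHAP still required by the target, host/auth.sh@112 wraps bdev_nvme_attach_controller in NOT and passes no --dhchap-key at all, expecting the JSON-RPC call to fail with -5 (Input/output error); host/auth.sh@114 then confirms via jq length that no controller was left behind. A minimal sketch of that pattern, assuming rpc_cmd is a thin wrapper over SPDK's scripts/rpc.py (the real NOT helper in autotest_common.sh does more argument validation than shown here):

    #!/usr/bin/env bash
    # NOT inverts the wrapped command's exit status, so the test step
    # succeeds exactly when the RPC is rejected.
    NOT() { if "$@"; then return 1; else return 0; fi; }

    # Assumption: rpc_cmd forwards to SPDK's JSON-RPC client.
    rpc_cmd() { scripts/rpc.py "$@"; }

    # No --dhchap-key against an auth-required subsystem: must fail (-5).
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0

    # ...and nothing may have been attached as a side effect.
    (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))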
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:23.312 request:
00:27:23.312 {
00:27:23.312 "name": "nvme0",
00:27:23.312 "trtype": "tcp",
00:27:23.312 "traddr": "10.0.0.1",
00:27:23.312 "adrfam": "ipv4",
00:27:23.312 "trsvcid": "4420",
00:27:23.312 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:27:23.312 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:27:23.312 "prchk_reftag": false,
00:27:23.312 "prchk_guard": false,
00:27:23.312 "hdgst": false,
00:27:23.312 "ddgst": false,
00:27:23.312 "dhchap_key": "key2",
00:27:23.312 "allow_unrecognized_csi": false,
00:27:23.312 "method": "bdev_nvme_attach_controller",
00:27:23.312 "req_id": 1
00:27:23.312 }
00:27:23.312 Got JSON-RPC error response
00:27:23.312 response:
00:27:23.312 {
00:27:23.312 "code": -5,
00:27:23.312 "message": "Input/output error"
00:27:23.312 }
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
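This second rejection is a key mismatch rather than a missing key: the target was just re-keyed with key1 (nvmet_auth_set_key sha256 ffdhe2048 1), so presenting --dhchap-key key2 fails the DH-HMAC-CHAP exchange and the attach again returns -5. For contrast, a condensed sketch of one successful iteration of the loop traced earlier, assuming the usual Linux nvmet configfs attribute names on the target side (those paths are an assumption of this sketch; the rpc.py calls mirror the rpc_cmd invocations in the log, and key2/ckey2 are key names registered with the host earlier in the test):

    #!/usr/bin/env bash
    host_nqn=nqn.2024-02.io.spdk:host0
    subsys_nqn=nqn.2024-02.io.spdk:cnode0
    host_cfg=/sys/kernel/config/nvmet/hosts/$host_nqn

    # Target side: select digest and DH group, install the key pair
    # for this host (key material elided in this sketch).
    echo 'hmac(sha512)' > "$host_cfg/dhchap_hash"
    echo ffdhe8192 > "$host_cfg/dhchap_dhgroup"
    echo "DHHC-1:..." > "$host_cfg/dhchap_key"        # host key, elided
    echo "DHHC-1:..." > "$host_cfg/dhchap_ctrlr_key"  # bidirectional key, elided

    # Host side: restrict the initiator to the same digest/DH group,
    # attach with the matching pair, verify, detach.
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q "$host_nqn" -n "$subsys_nqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0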
00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.312 request: 00:27:23.312 { 00:27:23.312 "name": "nvme0", 00:27:23.312 "trtype": "tcp", 00:27:23.312 "traddr": "10.0.0.1", 00:27:23.312 "adrfam": "ipv4", 00:27:23.312 "trsvcid": "4420", 00:27:23.312 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:23.312 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:23.312 "prchk_reftag": false, 00:27:23.312 "prchk_guard": false, 00:27:23.312 "hdgst": false, 00:27:23.312 "ddgst": false, 00:27:23.312 "dhchap_key": "key1", 00:27:23.312 "dhchap_ctrlr_key": "ckey2", 00:27:23.312 "allow_unrecognized_csi": false, 00:27:23.312 "method": "bdev_nvme_attach_controller", 00:27:23.312 "req_id": 1 00:27:23.312 } 00:27:23.312 Got JSON-RPC error response 00:27:23.312 response: 00:27:23.312 { 00:27:23.312 "code": -5, 00:27:23.312 "message": "Input/output 
error" 00:27:23.312 } 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.312 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.573 nvme0n1 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: ]] 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:23.573 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:23.574 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:23.574 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.574 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.834 request: 00:27:23.834 { 00:27:23.834 "name": "nvme0", 00:27:23.834 "dhchap_key": "key1", 00:27:23.834 "dhchap_ctrlr_key": "ckey2", 00:27:23.834 "method": "bdev_nvme_set_keys", 00:27:23.834 "req_id": 1 00:27:23.834 } 00:27:23.834 Got JSON-RPC error response 00:27:23.834 response: 00:27:23.834 { 00:27:23.834 "code": -5, 00:27:23.834 "message": "Input/output error" 00:27:23.834 } 00:27:23.834 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:23.834 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:23.834 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:23.834 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:23.834 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:27:23.834 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.834 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:23.834 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.834 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.834 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.834 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:23.834 14:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:24.778 14:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.778 14:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:24.778 14:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.778 14:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.778 14:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.778 14:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:24.778 14:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:24.778 14:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.778 14:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.778 14:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:24.778 14:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:24.778 14:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:24.778 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:24.778 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.778 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:24.778 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Nzk5MTVmMjMwOWVjNWU3ZjNlYzYzZDM0ZjUxYTMwZTEyYzA4NmU0YWM2Mjk0ZDE5zQi6Gg==: 00:27:24.778 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: ]] 00:27:24.778 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGYxNGNmY2E0NTkwYWJlMzViYzQzZTU3ZWM2MjI4NzkyZjI1NmIzNzBkNDdlODc1pAiHFg==: 00:27:24.778 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:24.778 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.778 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.778 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.778 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.778 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.778 
14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.778 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.778 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.778 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.778 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.778 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:24.778 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.778 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.040 nvme0n1 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5NjMyZDk2NjJiZTc3MTZiZGMzY2IzNDYwMGNkZTZVVVsk: 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: ]] 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzlhZWQ1OGRkODU5ZjA4ODIwNjBlZjQwZGJmNTkxMDMOdQty: 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:25.040 14:17:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.040 request: 00:27:25.040 { 00:27:25.040 "name": "nvme0", 00:27:25.040 "dhchap_key": "key2", 00:27:25.040 "dhchap_ctrlr_key": "ckey1", 00:27:25.040 "method": "bdev_nvme_set_keys", 00:27:25.040 "req_id": 1 00:27:25.040 } 00:27:25.040 Got JSON-RPC error response 00:27:25.040 response: 00:27:25.040 { 00:27:25.040 "code": -13, 00:27:25.040 "message": "Permission denied" 00:27:25.040 } 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:25.040 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:25.041 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:25.041 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:25.041 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.041 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:25.041 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.041 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.041 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.041 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:25.041 14:17:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:25.983 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.983 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:25.983 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.983 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:26.244 rmmod nvme_tcp 00:27:26.244 rmmod nvme_fabrics 00:27:26.244 
14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2880044 ']' 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2880044 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2880044 ']' 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2880044 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2880044 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2880044' 00:27:26.244 killing process with pid 2880044 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2880044 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2880044 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:26.244 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:26.505 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:26.505 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:26.505 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.506 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:26.506 14:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.419 14:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:28.419 14:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:28.419 14:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:28.419 14:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:28.419 14:17:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:28.419 14:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:27:28.419 14:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:28.419 14:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:28.419 14:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:28.419 14:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:28.419 14:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:28.419 14:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:28.419 14:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:32.632 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:32.632 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:32.632 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:32.632 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:32.632 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:32.632 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:32.632 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:32.632 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:32.632 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:32.632 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:32.632 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:32.632 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:32.632 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:32.632 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:32.632 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:32.632 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:32.632 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:32.632 14:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.ceX /tmp/spdk.key-null.u0h /tmp/spdk.key-sha256.5PP /tmp/spdk.key-sha384.TES /tmp/spdk.key-sha512.ugO /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:32.632 14:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:35.927 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:35.927 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:35.927 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:35.927 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:35.927 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:35.927 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:35.927 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:35.927 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:35.927 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:35.927 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:27:35.927 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:35.927 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:35.927 0000:00:01.5 (8086 0b00): Already 
using the vfio-pci driver 00:27:35.927 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:35.927 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:35.927 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:35.927 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:35.927 00:27:35.927 real 0m57.397s 00:27:35.927 user 0m51.493s 00:27:35.927 sys 0m15.442s 00:27:35.927 14:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:35.927 14:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.927 ************************************ 00:27:35.927 END TEST nvmf_auth_host 00:27:35.927 ************************************ 00:27:35.927 14:17:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:35.928 14:17:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:35.928 14:17:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:35.928 14:17:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:35.928 14:17:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.928 ************************************ 00:27:35.928 START TEST nvmf_digest 00:27:35.928 ************************************ 00:27:35.928 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:35.928 * Looking for test storage... 00:27:35.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:35.928 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:35.928 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:27:35.928 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 
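Stepping back to the nvmf_auth_host run that finishes above: its negative tests wrap rpc_cmd in the harness's NOT helper, so the -5 (Input/output error) and -13 (Permission denied) JSON-RPC responses earlier count as passes. A minimal sketch of that expected-failure idiom, with NOT written out as a simplified stand-in for the autotest_common.sh helper and rpc_py assumed to point at the checkout's scripts/rpc.py:

    # NOT inverts the exit status: it succeeds only when the wrapped command fails.
    NOT() { ! "$@"; }
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # key1 paired with ckey2 deliberately mismatches, mirroring host/auth.sh@136 above
    if NOT "$rpc_py" bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
        echo "set_keys rejected the mismatched pair, as the test expects"
    fi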
00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:36.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.189 --rc genhtml_branch_coverage=1 00:27:36.189 --rc genhtml_function_coverage=1 00:27:36.189 --rc genhtml_legend=1 00:27:36.189 --rc geninfo_all_blocks=1 00:27:36.189 --rc geninfo_unexecuted_blocks=1 00:27:36.189 00:27:36.189 ' 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:36.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.189 --rc genhtml_branch_coverage=1 00:27:36.189 --rc genhtml_function_coverage=1 00:27:36.189 --rc genhtml_legend=1 00:27:36.189 --rc geninfo_all_blocks=1 00:27:36.189 --rc geninfo_unexecuted_blocks=1 00:27:36.189 00:27:36.189 ' 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:36.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.189 --rc genhtml_branch_coverage=1 00:27:36.189 --rc genhtml_function_coverage=1 00:27:36.189 --rc genhtml_legend=1 00:27:36.189 --rc geninfo_all_blocks=1 00:27:36.189 --rc geninfo_unexecuted_blocks=1 00:27:36.189 00:27:36.189 ' 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:36.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:36.189 --rc genhtml_branch_coverage=1 00:27:36.189 --rc genhtml_function_coverage=1 00:27:36.189 --rc genhtml_legend=1 00:27:36.189 --rc geninfo_all_blocks=1 00:27:36.189 --rc geninfo_unexecuted_blocks=1 00:27:36.189 00:27:36.189 ' 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
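The scripts/common.sh trace that just completed splits the two version strings on '.', '-' and ':' and walks them field by field, concluding that lcov 1.15 predates 2 and enabling the legacy coverage flags. The same comparison, compressed into a standalone sketch (the helper name here is illustrative; the harness spells it lt/cmp_versions):

    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "old lcov: enable --rc lcov_branch_coverage=1 etc."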
00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.189 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:36.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ 
tcp != \t\c\p ]] 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:36.190 14:17:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:44.337 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:44.337 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:44.337 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:44.337 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:44.337 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:44.337 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:44.337 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:44.337 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
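The gather_supported_nvmf_pci_devs walk that follows is plain sysfs bookkeeping: collect the vendor:device IDs of supported NICs into the e810/x722/mlx arrays, then match every PCI function on the box against them and note the net interface bound to each hit. Reduced to the two E810 IDs this rig actually matches (0x1592, 0x159b), the scan looks roughly like:

    e810_ids=(0x1592 0x159b)                       # Intel E810 variants from the ID table
    declare -a pci_devs net_devs
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        [[ $vendor == 0x8086 ]] || continue
        for id in "${e810_ids[@]}"; do
            [[ $device == "$id" ]] || continue
            pci_devs+=("${dev##*/}")
            for net in "$dev"/net/*; do            # the kernel lists bound ifaces here
                [[ -e $net ]] && net_devs+=("${net##*/}")
            done
        done
    done
    echo "e810 functions: ${pci_devs[*]:-none}; net devices: ${net_devs[*]:-none}"

The cvl_0_0/cvl_0_1 names reported below come out of exactly this kind of per-device net/ listing.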
00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:44.338 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:44.338 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:44.338 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:44.338 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:44.338 14:17:49 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:44.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:44.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:27:44.338 00:27:44.338 --- 10.0.0.2 ping statistics --- 00:27:44.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.338 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:44.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:44.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:27:44.338 00:27:44.338 --- 10.0.0.1 ping statistics --- 00:27:44.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.338 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:44.338 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:44.338 ************************************ 00:27:44.338 START TEST nvmf_digest_clean 00:27:44.339 ************************************ 00:27:44.339 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:27:44.339 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:44.339 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:44.339 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:44.339 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:44.339 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:44.339 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:44.339 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:44.339 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:44.339 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2896325 00:27:44.339 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2896325 00:27:44.339 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
--wait-for-rpc 00:27:44.339 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2896325 ']' 00:27:44.339 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.339 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:44.339 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:44.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:44.339 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:44.339 14:17:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:44.339 [2024-12-05 14:17:49.885055] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:27:44.339 [2024-12-05 14:17:49.885118] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:44.339 [2024-12-05 14:17:49.984921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.339 [2024-12-05 14:17:50.041077] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:44.339 [2024-12-05 14:17:50.041131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:44.339 [2024-12-05 14:17:50.041140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:44.339 [2024-12-05 14:17:50.041147] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:44.339 [2024-12-05 14:17:50.041153] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
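nvmfappstart launches the target with --wait-for-rpc, so the framework stays paused until the harness has had a chance to configure it; waitforlisten is the piece that blocks until the JSON-RPC socket answers. A bare-bones version of that poll, assuming the default /var/tmp/spdk.sock and using rpc_get_methods (available even before framework init) as the liveness probe:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for (( i = 0; i < 100; i++ )); do
        # succeeds as soon as nvmf_tgt is listening, even before framework_start_init
        "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done
    (( i < 100 )) || { echo "nvmf_tgt (pid 2896325) never came up" >&2; exit 1; }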
00:27:44.339 [2024-12-05 14:17:50.041867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.600 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:44.601 null0 00:27:44.601 [2024-12-05 14:17:50.835414] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:44.601 [2024-12-05 14:17:50.859722] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2896668 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2896668 /var/tmp/bperf.sock 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2896668 ']' 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:44.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:44.601 14:17:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:44.862 [2024-12-05 14:17:50.918752] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:27:44.862 [2024-12-05 14:17:50.918815] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896668 ] 00:27:44.862 [2024-12-05 14:17:51.012402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.862 [2024-12-05 14:17:51.064252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.802 14:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:45.802 14:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:45.802 14:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:45.802 14:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:45.802 14:17:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:45.802 14:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:45.802 14:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:46.373 nvme0n1 00:27:46.373 14:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:46.373 14:17:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:46.373 Running I/O for 2 seconds... 
00:27:48.257 18508.00 IOPS, 72.30 MiB/s [2024-12-05T13:17:54.557Z] 20225.00 IOPS, 79.00 MiB/s 00:27:48.257 Latency(us) 00:27:48.257 [2024-12-05T13:17:54.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.257 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:48.257 nvme0n1 : 2.00 20252.97 79.11 0.00 0.00 6313.48 2402.99 18022.40 00:27:48.257 [2024-12-05T13:17:54.557Z] =================================================================================================================== 00:27:48.257 [2024-12-05T13:17:54.557Z] Total : 20252.97 79.11 0.00 0.00 6313.48 2402.99 18022.40 00:27:48.257 { 00:27:48.257 "results": [ 00:27:48.257 { 00:27:48.257 "job": "nvme0n1", 00:27:48.257 "core_mask": "0x2", 00:27:48.257 "workload": "randread", 00:27:48.257 "status": "finished", 00:27:48.257 "queue_depth": 128, 00:27:48.257 "io_size": 4096, 00:27:48.257 "runtime": 2.003558, 00:27:48.257 "iops": 20252.969966429722, 00:27:48.257 "mibps": 79.1131639313661, 00:27:48.257 "io_failed": 0, 00:27:48.257 "io_timeout": 0, 00:27:48.257 "avg_latency_us": 6313.4801005470945, 00:27:48.257 "min_latency_us": 2402.9866666666667, 00:27:48.257 "max_latency_us": 18022.4 00:27:48.257 } 00:27:48.257 ], 00:27:48.257 "core_count": 1 00:27:48.257 } 00:27:48.257 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:48.257 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:48.257 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:48.257 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:48.257 | select(.opcode=="crc32c") 00:27:48.257 | "\(.module_name) \(.executed)"' 00:27:48.257 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:48.518 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:48.518 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:48.518 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:48.518 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:48.518 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2896668 00:27:48.518 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2896668 ']' 00:27:48.518 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2896668 00:27:48.518 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:48.518 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:48.518 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2896668 00:27:48.518 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:48.518 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 
= sudo ']' 00:27:48.518 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2896668' 00:27:48.518 killing process with pid 2896668 00:27:48.518 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2896668 00:27:48.518 Received shutdown signal, test time was about 2.000000 seconds 00:27:48.518 00:27:48.518 Latency(us) 00:27:48.518 [2024-12-05T13:17:54.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.518 [2024-12-05T13:17:54.818Z] =================================================================================================================== 00:27:48.518 [2024-12-05T13:17:54.818Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:48.518 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2896668 00:27:48.779 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:48.779 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:48.779 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:48.779 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:48.779 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:48.779 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:48.779 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:48.779 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2897350 00:27:48.779 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2897350 /var/tmp/bperf.sock 00:27:48.779 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2897350 ']' 00:27:48.779 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:48.779 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:48.779 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:48.779 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:48.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:48.779 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:48.779 14:17:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:48.779 [2024-12-05 14:17:54.952950] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
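The second bperf instance spinning up here repeats the choreography of the first run: start bdevperf paused, unpause its framework over the private RPC socket, attach the target with TCP data digest enabled, then drive the timed run from bdevperf.py. Condensed from the commands in this trace, with spdk_dir standing in for the jenkins checkout:

    spdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$spdk_dir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
    # (a waitforlisten-style poll on /var/tmp/bperf.sock belongs here)
    "$spdk_dir/scripts/rpc.py" -s /var/tmp/bperf.sock framework_start_init
    "$spdk_dir/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$spdk_dir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

With -o 131072 and -q 16 this is the "randread 131072 16" pass, whose zero-copy warnings and throughput table follow.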
00:27:48.779 [2024-12-05 14:17:54.953006] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2897350 ] 00:27:48.779 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:48.779 Zero copy mechanism will not be used. 00:27:48.779 [2024-12-05 14:17:55.038805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.779 [2024-12-05 14:17:55.067797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.732 14:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:49.732 14:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:49.732 14:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:49.732 14:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:49.732 14:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:49.732 14:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:49.732 14:17:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:50.301 nvme0n1 00:27:50.301 14:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:50.301 14:17:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:50.301 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:50.301 Zero copy mechanism will not be used. 00:27:50.301 Running I/O for 2 seconds... 
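The records above capture one full bperf cycle of the digest-clean test: bdevperf is launched with --wait-for-rpc, framework_start_init is issued over the bperf socket, a controller is attached over TCP with --ddgst so data digests are computed on the receive path, and perform_tests drives the workload. A minimal sketch of that sequence, assuming the same workspace paths and an already-listening target at 10.0.0.2:4420 (a condensed re-run sketch, not the harness itself):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start bdevperf paused so the transport can be configured over RPC first
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # --ddgst enables NVMe/TCP data digest on this controller
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests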
00:27:52.625 2917.00 IOPS, 364.62 MiB/s
[2024-12-05T13:17:58.925Z] 2936.50 IOPS, 367.06 MiB/s
00:27:52.625 Latency(us)
00:27:52.625 [2024-12-05T13:17:58.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:52.625 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:27:52.625 nvme0n1 : 2.00 2941.62 367.70 0.00 0.00 5436.63 723.63 13161.81
00:27:52.625 [2024-12-05T13:17:58.925Z] ===================================================================================================================
00:27:52.625 [2024-12-05T13:17:58.925Z] Total : 2941.62 367.70 0.00 0.00 5436.63 723.63 13161.81
00:27:52.625 {
00:27:52.625 "results": [
00:27:52.625 {
00:27:52.625 "job": "nvme0n1",
00:27:52.625 "core_mask": "0x2",
00:27:52.625 "workload": "randread",
00:27:52.625 "status": "finished",
00:27:52.625 "queue_depth": 16,
00:27:52.625 "io_size": 131072,
00:27:52.625 "runtime": 2.001956,
00:27:52.625 "iops": 2941.623092615422,
00:27:52.625 "mibps": 367.70288657692777,
00:27:52.625 "io_failed": 0,
00:27:52.625 "io_timeout": 0,
00:27:52.625 "avg_latency_us": 5436.625645553858,
00:27:52.625 "min_latency_us": 723.6266666666667,
00:27:52.625 "max_latency_us": 13161.813333333334
00:27:52.625 }
00:27:52.625 ],
00:27:52.625 "core_count": 1
00:27:52.625 }
00:27:52.625 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:27:52.625 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:27:52.625 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:27:52.625 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:27:52.625 | select(.opcode=="crc32c")
00:27:52.625 | "\(.module_name) \(.executed)"'
00:27:52.625 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2897350
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2897350 ']'
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2897350
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2897350
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2897350'
killing process with pid 2897350
14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2897350
Received shutdown signal, test time was about 2.000000 seconds
00:27:52.626
00:27:52.626 Latency(us)
[2024-12-05T13:17:58.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-05T13:17:58.926Z] ===================================================================================================================
[2024-12-05T13:17:58.926Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2897350
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2898051
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2898051 /var/tmp/bperf.sock
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2898051 ']'
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:52.626 14:17:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:27:52.886 [2024-12-05 14:17:58.961348] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization...
00:27:52.886 [2024-12-05 14:17:58.961405] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2898051 ]
00:27:52.886 [2024-12-05 14:17:59.043638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:52.886 [2024-12-05 14:17:59.073145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:53.457 14:17:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:53.457 14:17:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:27:53.457 14:17:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:27:53.457 14:17:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:27:53.457 14:17:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:27:53.717 14:17:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:53.717 14:17:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:54.288 nvme0n1
00:27:54.288 14:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:27:54.288 14:18:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:54.288 Running I/O for 2 seconds...
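After each run the harness proves the digests were really computed by reading back the accel framework statistics, as the get_accel_stats records above show. A sketch of that check under the same assumptions (software is the expected module because scan_dsa=false on these runs):

  # pull the crc32c row out of accel_get_stats and split it into module + count
  read -r acc_module acc_executed < <(
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  # the test passes only if crc32c actually ran, and ran in the expected module
  (( acc_executed > 0 )) && [[ $acc_module == software ]]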
00:27:56.250 28885.00 IOPS, 112.83 MiB/s
[2024-12-05T13:18:02.550Z] 29042.50 IOPS, 113.45 MiB/s
00:27:56.250 Latency(us)
00:27:56.250 [2024-12-05T13:18:02.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:56.250 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:56.250 nvme0n1 : 2.00 29044.37 113.45 0.00 0.00 4399.75 3399.68 12997.97
00:27:56.250 [2024-12-05T13:18:02.550Z] ===================================================================================================================
00:27:56.250 [2024-12-05T13:18:02.550Z] Total : 29044.37 113.45 0.00 0.00 4399.75 3399.68 12997.97
00:27:56.250 {
00:27:56.250 "results": [
00:27:56.250 {
00:27:56.250 "job": "nvme0n1",
00:27:56.250 "core_mask": "0x2",
00:27:56.250 "workload": "randwrite",
00:27:56.250 "status": "finished",
00:27:56.250 "queue_depth": 128,
00:27:56.250 "io_size": 4096,
00:27:56.250 "runtime": 2.004278,
00:27:56.250 "iops": 29044.37408383468,
00:27:56.250 "mibps": 113.45458626497921,
00:27:56.250 "io_failed": 0,
00:27:56.250 "io_timeout": 0,
00:27:56.250 "avg_latency_us": 4399.75494500083,
00:27:56.250 "min_latency_us": 3399.68,
00:27:56.250 "max_latency_us": 12997.973333333333
00:27:56.250 }
00:27:56.250 ],
00:27:56.250 "core_count": 1
00:27:56.250 }
00:27:56.250 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:27:56.250 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:27:56.250 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:27:56.250 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:27:56.250 | select(.opcode=="crc32c")
00:27:56.250 | "\(.module_name) \(.executed)"'
00:27:56.250 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2898051
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2898051 ']'
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2898051
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2898051
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2898051'
killing process with pid 2898051
14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2898051
Received shutdown signal, test time was about 2.000000 seconds
00:27:56.512
00:27:56.512 Latency(us)
[2024-12-05T13:18:02.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-05T13:18:02.812Z] ===================================================================================================================
[2024-12-05T13:18:02.812Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2898051
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2898984
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2898984 /var/tmp/bperf.sock
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2898984 ']'
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:56.512 14:18:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:27:56.773 [2024-12-05 14:18:02.836409] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization...
00:27:56.773 [2024-12-05 14:18:02.836490] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2898984 ]
00:27:56.773 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:56.773 Zero copy mechanism will not be used.
00:27:56.773 [2024-12-05 14:18:02.924068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:56.773 [2024-12-05 14:18:02.953762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:57.343 14:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:57.343 14:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:27:57.343 14:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:27:57.343 14:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:27:57.343 14:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:27:57.604 14:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:57.604 14:18:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:58.176 nvme0n1
00:27:58.176 14:18:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:27:58.176 14:18:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:58.176 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:58.176 Zero copy mechanism will not be used.
00:27:58.176 Running I/O for 2 seconds...
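In each result block above, the mibps field is derived from the measured IOPS and the fixed I/O size: mibps = iops * io_size / 2^20. A one-line check against the 128 KiB randread result reported earlier:

  awk 'BEGIN { printf "%.2f MiB/s\n", 2941.623092615422 * 131072 / 1048576 }'   # prints 367.70, matching the report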
00:28:00.056 3345.00 IOPS, 418.12 MiB/s
[2024-12-05T13:18:06.356Z] 3412.00 IOPS, 426.50 MiB/s
00:28:00.056 Latency(us)
00:28:00.056 [2024-12-05T13:18:06.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:00.056 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:00.056 nvme0n1 : 2.00 3416.03 427.00 0.00 0.00 4678.28 1242.45 14199.47
00:28:00.056 [2024-12-05T13:18:06.356Z] ===================================================================================================================
00:28:00.056 [2024-12-05T13:18:06.356Z] Total : 3416.03 427.00 0.00 0.00 4678.28 1242.45 14199.47
00:28:00.056 {
00:28:00.056 "results": [
00:28:00.056 {
00:28:00.056 "job": "nvme0n1",
00:28:00.056 "core_mask": "0x2",
00:28:00.056 "workload": "randwrite",
00:28:00.056 "status": "finished",
00:28:00.056 "queue_depth": 16,
00:28:00.056 "io_size": 131072,
00:28:00.056 "runtime": 2.003494,
00:28:00.056 "iops": 3416.0321917609936,
00:28:00.056 "mibps": 427.0040239701242,
00:28:00.056 "io_failed": 0,
00:28:00.056 "io_timeout": 0,
00:28:00.056 "avg_latency_us": 4678.278558347944,
00:28:00.056 "min_latency_us": 1242.4533333333334,
00:28:00.056 "max_latency_us": 14199.466666666667
00:28:00.056 }
00:28:00.056 ],
00:28:00.056 "core_count": 1
00:28:00.056 }
00:28:00.316 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:00.316 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:00.316 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:00.316 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:00.316 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:00.316 | select(.opcode=="crc32c")
00:28:00.316 | "\(.module_name) \(.executed)"'
00:28:00.316 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:00.316 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:00.316 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:00.316 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:00.316 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2898984
00:28:00.316 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2898984 ']'
00:28:00.316 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2898984
00:28:00.316 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:28:00.316 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:00.316 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2898984
00:28:00.316 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:00.316 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:00.316 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2898984'
killing process with pid 2898984
14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2898984
Received shutdown signal, test time was about 2.000000 seconds
00:28:00.316
00:28:00.316 Latency(us)
[2024-12-05T13:18:06.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-05T13:18:06.616Z] ===================================================================================================================
[2024-12-05T13:18:06.616Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:00.316 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2898984
00:28:00.576 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2896325
00:28:00.576 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2896325 ']'
00:28:00.576 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2896325
00:28:00.576 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:28:00.576 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:00.576 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2896325
00:28:00.576 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:00.576 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:00.576 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2896325'
killing process with pid 2896325
14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2896325
14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2896325
00:28:00.835
00:28:00.835 real 0m17.070s
00:28:00.835 user 0m33.952s
00:28:00.835 sys 0m3.575s
00:28:00.835 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:00.835 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:00.835 ************************************
00:28:00.835 END TEST nvmf_digest_clean
00:28:00.835 ************************************
00:28:00.835 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:28:00.835 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:28:00.835 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:00.835 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:28:00.835 ************************************
00:28:00.835 START TEST nvmf_digest_error
00:28:00.835 ************************************
00:28:00.835 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error
00:28:00.835 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:28:00.835 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:00.835 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:00.835 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:00.836 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2899865
00:28:00.836 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2899865
00:28:00.836 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:28:00.836 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2899865 ']'
00:28:00.836 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:00.836 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:00.836 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:00.836 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:00.836 14:18:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:00.836 [2024-12-05 14:18:07.025375] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization...
00:28:00.836 [2024-12-05 14:18:07.025428] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:00.836 [2024-12-05 14:18:07.118071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:01.095 [2024-12-05 14:18:07.149683] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:01.095 [2024-12-05 14:18:07.149711] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:01.095 [2024-12-05 14:18:07.149716] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:01.095 [2024-12-05 14:18:07.149721] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:01.095 [2024-12-05 14:18:07.149726] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:01.095 [2024-12-05 14:18:07.150208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:01.666 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:01.666 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:28:01.666 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:01.666 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:01.666 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:01.666 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:01.666 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:28:01.666 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:01.666 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:01.666 [2024-12-05 14:18:07.856137] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:28:01.666 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:01.666 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:28:01.666 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:28:01.666 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:01.666 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:01.666 null0
00:28:01.666 [2024-12-05 14:18:07.934970] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:01.666 [2024-12-05 14:18:07.959150] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:01.927 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:01.927 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:28:01.927 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:01.927 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:01.927 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:01.927 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:01.927 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2900006
00:28:01.927 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2900006 /var/tmp/bperf.sock
00:28:01.927 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2900006 ']'
00:28:01.927 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
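For the error test the target is deliberately started with --wait-for-rpc so the crc32c opcode can be routed to the error-injecting accel module before initialization finishes; the null0 bdev and the TCP listener on 10.0.0.2:4420 then come from the usual common_target_config. A condensed sketch of that target-side setup under the same assumptions (same netns and paths; the framework_start_init call and the subsystem/bdev RPCs are implied by the notices above rather than shown verbatim in this log):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start the target paused, then route crc32c to the error module before init completes
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  $SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error
  $SPDK/scripts/rpc.py framework_start_init   # finish app init; common_target_config follows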
00:28:01.927 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:01.927 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:01.927 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:01.927 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:01.927 14:18:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:01.927 [2024-12-05 14:18:08.016701] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization...
00:28:01.927 [2024-12-05 14:18:08.016748] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2900006 ]
00:28:01.927 [2024-12-05 14:18:08.097541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:01.927 [2024-12-05 14:18:08.127435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:02.864 14:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:02.864 14:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:28:02.864 14:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:02.864 14:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:02.864 14:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:02.864 14:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:02.864 14:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:02.864 14:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:02.864 14:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:02.865 14:18:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:03.125 nvme0n1
00:28:03.125 14:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:03.125 14:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:03.125 14:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
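The host side of the error run differs from the clean runs in two ways visible above: bdev_nvme_set_options enables per-command error accounting with unlimited bdev retries, and accel_error_inject_error arms crc32c corruption on the target (-t corrupt -i 256), so digests computed for outgoing data no longer match what the host recomputes. A sketch of the two calls; note rpc_cmd in the harness resolves to the target app's RPC socket, shown here without an explicit -s, which is an assumption:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # host: count NVMe errors and retry failed I/O indefinitely at the bdev layer
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target: arm crc32c corruption, as in the @67 record above
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

Each corrupted digest then surfaces on the host as a "data digest error" with a COMMAND TRANSIENT TRANSPORT ERROR completion, which is exactly what the records that follow contain.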
00:28:03.125 14:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.125 14:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:03.125 14:18:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:03.125 Running I/O for 2 seconds... 00:28:03.125 [2024-12-05 14:18:09.335611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.125 [2024-12-05 14:18:09.335642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.125 [2024-12-05 14:18:09.335651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.125 [2024-12-05 14:18:09.344591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.125 [2024-12-05 14:18:09.344611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.125 [2024-12-05 14:18:09.344618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.125 [2024-12-05 14:18:09.354816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.125 [2024-12-05 14:18:09.354834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.125 [2024-12-05 14:18:09.354841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.125 [2024-12-05 14:18:09.362705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.125 [2024-12-05 14:18:09.362723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.125 [2024-12-05 14:18:09.362730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.125 [2024-12-05 14:18:09.371532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.125 [2024-12-05 14:18:09.371551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.125 [2024-12-05 14:18:09.371558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.125 [2024-12-05 14:18:09.380493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.125 [2024-12-05 14:18:09.380516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.125 [2024-12-05 14:18:09.380523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.125 [2024-12-05 14:18:09.391405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.125 [2024-12-05 14:18:09.391423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.125 [2024-12-05 14:18:09.391430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.125 [2024-12-05 14:18:09.403467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.125 [2024-12-05 14:18:09.403485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.125 [2024-12-05 14:18:09.403492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.125 [2024-12-05 14:18:09.411304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.125 [2024-12-05 14:18:09.411321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.125 [2024-12-05 14:18:09.411327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.385 [2024-12-05 14:18:09.422689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.385 [2024-12-05 14:18:09.422707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.385 [2024-12-05 14:18:09.422713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.385 [2024-12-05 14:18:09.431321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.385 [2024-12-05 14:18:09.431338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.385 [2024-12-05 14:18:09.431344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.385 [2024-12-05 14:18:09.440104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.385 [2024-12-05 14:18:09.440122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.385 [2024-12-05 14:18:09.440128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.385 [2024-12-05 14:18:09.449880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.385 [2024-12-05 14:18:09.449898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.385 [2024-12-05 14:18:09.449905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.385 [2024-12-05 14:18:09.460109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.385 [2024-12-05 14:18:09.460126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.385 [2024-12-05 14:18:09.460139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.385 [2024-12-05 14:18:09.470933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.385 [2024-12-05 14:18:09.470950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.385 [2024-12-05 14:18:09.470957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.385 [2024-12-05 14:18:09.479086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.385 [2024-12-05 14:18:09.479103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.385 [2024-12-05 14:18:09.479109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.385 [2024-12-05 14:18:09.488440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.385 [2024-12-05 14:18:09.488463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.385 [2024-12-05 14:18:09.488470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.385 [2024-12-05 14:18:09.497843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.385 [2024-12-05 14:18:09.497861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.385 [2024-12-05 14:18:09.497867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.385 [2024-12-05 14:18:09.507848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.385 [2024-12-05 14:18:09.507864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.385 [2024-12-05 14:18:09.507872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.385 [2024-12-05 14:18:09.515817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.385 [2024-12-05 14:18:09.515835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:03.385 [2024-12-05 14:18:09.515842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.385 [2024-12-05 14:18:09.525492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.385 [2024-12-05 14:18:09.525510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.385 [2024-12-05 14:18:09.525516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.385 [2024-12-05 14:18:09.534650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.385 [2024-12-05 14:18:09.534669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.385 [2024-12-05 14:18:09.534676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.385 [2024-12-05 14:18:09.543106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.385 [2024-12-05 14:18:09.543126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.385 [2024-12-05 14:18:09.543133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.385 [2024-12-05 14:18:09.551557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.385 [2024-12-05 14:18:09.551574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.385 [2024-12-05 14:18:09.551580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.385 [2024-12-05 14:18:09.561022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.385 [2024-12-05 14:18:09.561039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.385 [2024-12-05 14:18:09.561046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.385 [2024-12-05 14:18:09.569742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.385 [2024-12-05 14:18:09.569759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.385 [2024-12-05 14:18:09.569765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.385 [2024-12-05 14:18:09.578623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.385 [2024-12-05 14:18:09.578640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 
lba:4889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.385 [2024-12-05 14:18:09.578646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.385 [2024-12-05 14:18:09.587312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.385 [2024-12-05 14:18:09.587329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.385 [2024-12-05 14:18:09.587335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.385 [2024-12-05 14:18:09.596907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.385 [2024-12-05 14:18:09.596924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.385 [2024-12-05 14:18:09.596930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.385 [2024-12-05 14:18:09.606044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.385 [2024-12-05 14:18:09.606061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.385 [2024-12-05 14:18:09.606067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.385 [2024-12-05 14:18:09.614971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.385 [2024-12-05 14:18:09.614988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.385 [2024-12-05 14:18:09.614994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.385 [2024-12-05 14:18:09.627107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.385 [2024-12-05 14:18:09.627124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.386 [2024-12-05 14:18:09.627130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.386 [2024-12-05 14:18:09.635083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.386 [2024-12-05 14:18:09.635100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.386 [2024-12-05 14:18:09.635107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.386 [2024-12-05 14:18:09.644553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.386 [2024-12-05 14:18:09.644571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.386 [2024-12-05 14:18:09.644577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.386 [2024-12-05 14:18:09.653824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.386 [2024-12-05 14:18:09.653841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.386 [2024-12-05 14:18:09.653847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.386 [2024-12-05 14:18:09.663191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.386 [2024-12-05 14:18:09.663208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.386 [2024-12-05 14:18:09.663214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.386 [2024-12-05 14:18:09.672168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.386 [2024-12-05 14:18:09.672185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.386 [2024-12-05 14:18:09.672193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.386 [2024-12-05 14:18:09.681058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.386 [2024-12-05 14:18:09.681076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.386 [2024-12-05 14:18:09.681083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.646 [2024-12-05 14:18:09.689601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.646 [2024-12-05 14:18:09.689619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.646 [2024-12-05 14:18:09.689625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.646 [2024-12-05 14:18:09.698468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.646 [2024-12-05 14:18:09.698486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.646 [2024-12-05 14:18:09.698497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.646 [2024-12-05 14:18:09.707207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 
00:28:03.646 [2024-12-05 14:18:09.707223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.646 [2024-12-05 14:18:09.707230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.646 [2024-12-05 14:18:09.716035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.716052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.716058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.647 [2024-12-05 14:18:09.725285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.725303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.725309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.647 [2024-12-05 14:18:09.734666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.734683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.734689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.647 [2024-12-05 14:18:09.743999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.744016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.744022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.647 [2024-12-05 14:18:09.752331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.752348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.752355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.647 [2024-12-05 14:18:09.762128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.762145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.762151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.647 [2024-12-05 14:18:09.771127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.771144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.771151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.647 [2024-12-05 14:18:09.779353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.779373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.779379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.647 [2024-12-05 14:18:09.788575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.788591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.788598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.647 [2024-12-05 14:18:09.797419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.797437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.797443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.647 [2024-12-05 14:18:09.807207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.807224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.807231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.647 [2024-12-05 14:18:09.814878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.814894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.814900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.647 [2024-12-05 14:18:09.825439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.825463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.825471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.647 [2024-12-05 14:18:09.834644] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.834661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.834667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.647 [2024-12-05 14:18:09.843527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.843544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.843550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.647 [2024-12-05 14:18:09.851472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.851488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.851495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.647 [2024-12-05 14:18:09.860420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.860437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.860444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.647 [2024-12-05 14:18:09.869842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.869858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.869865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.647 [2024-12-05 14:18:09.879299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.879315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:25092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.879322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.647 [2024-12-05 14:18:09.888163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.888179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.888186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:03.647 [2024-12-05 14:18:09.896266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.896282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.896288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.647 [2024-12-05 14:18:09.906892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.906910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.906918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.647 [2024-12-05 14:18:09.916899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.916916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.916922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.647 [2024-12-05 14:18:09.925406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.925422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.925428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.647 [2024-12-05 14:18:09.934304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.647 [2024-12-05 14:18:09.934324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.647 [2024-12-05 14:18:09.934330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.647 [2024-12-05 14:18:09.942607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.648 [2024-12-05 14:18:09.942625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.648 [2024-12-05 14:18:09.942632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.908 [2024-12-05 14:18:09.952220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.908 [2024-12-05 14:18:09.952237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.908 [2024-12-05 14:18:09.952243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.908 [2024-12-05 14:18:09.960787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.908 [2024-12-05 14:18:09.960804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.908 [2024-12-05 14:18:09.960811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.908 [2024-12-05 14:18:09.969244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.908 [2024-12-05 14:18:09.969261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.908 [2024-12-05 14:18:09.969268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.908 [2024-12-05 14:18:09.979689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.908 [2024-12-05 14:18:09.979706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.908 [2024-12-05 14:18:09.979712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.908 [2024-12-05 14:18:09.989564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.908 [2024-12-05 14:18:09.989581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.908 [2024-12-05 14:18:09.989588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.908 [2024-12-05 14:18:09.998807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.908 [2024-12-05 14:18:09.998824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.908 [2024-12-05 14:18:09.998831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.908 [2024-12-05 14:18:10.008900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.908 [2024-12-05 14:18:10.008918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.908 [2024-12-05 14:18:10.008925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.908 [2024-12-05 14:18:10.017822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.908 [2024-12-05 14:18:10.017841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.908 [2024-12-05 14:18:10.017848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.908 [2024-12-05 14:18:10.026921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.908 [2024-12-05 14:18:10.026938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.908 [2024-12-05 14:18:10.026945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.908 [2024-12-05 14:18:10.038645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.908 [2024-12-05 14:18:10.038662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.908 [2024-12-05 14:18:10.038669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.908 [2024-12-05 14:18:10.050007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.909 [2024-12-05 14:18:10.050024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.909 [2024-12-05 14:18:10.050033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.909 [2024-12-05 14:18:10.058326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.909 [2024-12-05 14:18:10.058343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.909 [2024-12-05 14:18:10.058350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.909 [2024-12-05 14:18:10.069358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.909 [2024-12-05 14:18:10.069375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.909 [2024-12-05 14:18:10.069382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.909 [2024-12-05 14:18:10.078971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.909 [2024-12-05 14:18:10.078989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.909 [2024-12-05 14:18:10.078995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.909 [2024-12-05 14:18:10.088190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.909 [2024-12-05 14:18:10.088207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:03.909 [2024-12-05 14:18:10.088213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.909 [2024-12-05 14:18:10.095839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.909 [2024-12-05 14:18:10.095856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.909 [2024-12-05 14:18:10.095867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.909 [2024-12-05 14:18:10.105585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.909 [2024-12-05 14:18:10.105602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.909 [2024-12-05 14:18:10.105609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.909 [2024-12-05 14:18:10.116416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.909 [2024-12-05 14:18:10.116433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.909 [2024-12-05 14:18:10.116440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.909 [2024-12-05 14:18:10.124301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.909 [2024-12-05 14:18:10.124318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.909 [2024-12-05 14:18:10.124324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.909 [2024-12-05 14:18:10.134373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.909 [2024-12-05 14:18:10.134390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.909 [2024-12-05 14:18:10.134396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.909 [2024-12-05 14:18:10.144211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.909 [2024-12-05 14:18:10.144228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.909 [2024-12-05 14:18:10.144235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.909 [2024-12-05 14:18:10.154765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.909 [2024-12-05 14:18:10.154782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 
lba:9420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.909 [2024-12-05 14:18:10.154788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.909 [2024-12-05 14:18:10.163862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.909 [2024-12-05 14:18:10.163879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.909 [2024-12-05 14:18:10.163886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.909 [2024-12-05 14:18:10.171021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.909 [2024-12-05 14:18:10.171038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.909 [2024-12-05 14:18:10.171045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.909 [2024-12-05 14:18:10.180743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.909 [2024-12-05 14:18:10.180765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.909 [2024-12-05 14:18:10.180772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.909 [2024-12-05 14:18:10.190356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.909 [2024-12-05 14:18:10.190373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.909 [2024-12-05 14:18:10.190379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.909 [2024-12-05 14:18:10.198943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:03.909 [2024-12-05 14:18:10.198959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.909 [2024-12-05 14:18:10.198966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.170 [2024-12-05 14:18:10.208321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.170 [2024-12-05 14:18:10.208339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.170 [2024-12-05 14:18:10.208347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.170 [2024-12-05 14:18:10.216525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.170 [2024-12-05 14:18:10.216542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.170 [2024-12-05 14:18:10.216549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.170 [2024-12-05 14:18:10.224665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.170 [2024-12-05 14:18:10.224682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.170 [2024-12-05 14:18:10.224689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.170 [2024-12-05 14:18:10.234187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.170 [2024-12-05 14:18:10.234204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.170 [2024-12-05 14:18:10.234211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.170 [2024-12-05 14:18:10.243426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.170 [2024-12-05 14:18:10.243442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.170 [2024-12-05 14:18:10.243448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.170 [2024-12-05 14:18:10.252785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.170 [2024-12-05 14:18:10.252801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.170 [2024-12-05 14:18:10.252808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.170 [2024-12-05 14:18:10.260803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.170 [2024-12-05 14:18:10.260819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.170 [2024-12-05 14:18:10.260825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.170 [2024-12-05 14:18:10.269816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.170 [2024-12-05 14:18:10.269833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.170 [2024-12-05 14:18:10.269839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.170 [2024-12-05 14:18:10.278734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 
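Each failed READ completes with "COMMAND TRANSIENT TRANSPORT ERROR (00/22)": status code type 0x00 (generic command status) and status code 0x22 (Command Transient Transport Error), with dnr:0 (Do Not Retry clear), meaning the host is permitted to retry the command. A small sketch of decoding those fields from the 16-bit status word of an NVMe completion queue entry, with the field layout per the NVMe base specification; the helper names here are illustrative, not SPDK's:

    #include <stdint.h>
    #include <stdio.h>

    /* NVMe CQE DW3[31:16] status word: bit 0 = phase tag, bits 8:1 = SC,
     * bits 11:9 = SCT, bits 13:12 = CRD, bit 14 = M, bit 15 = DNR. */
    struct nvme_status {
        uint8_t sc;   /* status code */
        uint8_t sct;  /* status code type */
        uint8_t m;    /* more information available */
        uint8_t dnr;  /* do not retry */
    };

    static struct nvme_status decode_status(uint16_t sw)
    {
        struct nvme_status s;
        s.sc  = (sw >> 1)  & 0xFF;
        s.sct = (sw >> 9)  & 0x7;
        s.m   = (sw >> 14) & 0x1;
        s.dnr = (sw >> 15) & 0x1;
        return s;
    }

    int main(void)
    {
        /* SCT=0x00, SC=0x22 -> the "(00/22)" transient transport error
         * seen in every completion above. */
        uint16_t sw = (uint16_t)((0x22 << 1) | (0x0 << 9));
        struct nvme_status s = decode_status(sw);
        printf("(%02x/%02x) m:%u dnr:%u retryable:%s\n",
               s.sct, s.sc, s.m, s.dnr, s.dnr ? "no" : "yes");
        return 0;
    }

Decoding the logged values gives m:0 dnr:0, matching the tail of each completion record, so these errors are transient and retryable rather than fatal command failures.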
00:28:04.170 [2024-12-05 14:18:10.278750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.170 [2024-12-05 14:18:10.278756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.170 [2024-12-05 14:18:10.287662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.170 [2024-12-05 14:18:10.287678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.170 [2024-12-05 14:18:10.287684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.170 [2024-12-05 14:18:10.296230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.170 [2024-12-05 14:18:10.296247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.170 [2024-12-05 14:18:10.296253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.171 [2024-12-05 14:18:10.305890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.171 [2024-12-05 14:18:10.305906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.171 [2024-12-05 14:18:10.305912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.171 [2024-12-05 14:18:10.313959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.171 [2024-12-05 14:18:10.313975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.171 [2024-12-05 14:18:10.313982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.171 27429.00 IOPS, 107.14 MiB/s [2024-12-05T13:18:10.471Z] [2024-12-05 14:18:10.324296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.171 [2024-12-05 14:18:10.324314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.171 [2024-12-05 14:18:10.324320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.171 [2024-12-05 14:18:10.335944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.171 [2024-12-05 14:18:10.335964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.171 [2024-12-05 14:18:10.335971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.171 [2024-12-05 14:18:10.345301] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.171 [2024-12-05 14:18:10.345318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.171 [2024-12-05 14:18:10.345324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.171 [2024-12-05 14:18:10.354384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.171 [2024-12-05 14:18:10.354401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.171 [2024-12-05 14:18:10.354407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.171 [2024-12-05 14:18:10.363789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.171 [2024-12-05 14:18:10.363806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.171 [2024-12-05 14:18:10.363813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.171 [2024-12-05 14:18:10.372845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.171 [2024-12-05 14:18:10.372861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.171 [2024-12-05 14:18:10.372868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.171 [2024-12-05 14:18:10.381654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.171 [2024-12-05 14:18:10.381671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.171 [2024-12-05 14:18:10.381677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.171 [2024-12-05 14:18:10.390217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.171 [2024-12-05 14:18:10.390234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.171 [2024-12-05 14:18:10.390241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.171 [2024-12-05 14:18:10.400257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.171 [2024-12-05 14:18:10.400274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.171 [2024-12-05 14:18:10.400281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:04.171 [2024-12-05 14:18:10.408043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.171 [2024-12-05 14:18:10.408060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.171 [2024-12-05 14:18:10.408067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.171 [2024-12-05 14:18:10.420654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.171 [2024-12-05 14:18:10.420671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.171 [2024-12-05 14:18:10.420678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.171 [2024-12-05 14:18:10.431658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.171 [2024-12-05 14:18:10.431675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.171 [2024-12-05 14:18:10.431681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.171 [2024-12-05 14:18:10.441600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.171 [2024-12-05 14:18:10.441618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.171 [2024-12-05 14:18:10.441624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.171 [2024-12-05 14:18:10.449672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.171 [2024-12-05 14:18:10.449689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.171 [2024-12-05 14:18:10.449695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.171 [2024-12-05 14:18:10.458438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.171 [2024-12-05 14:18:10.458459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.171 [2024-12-05 14:18:10.458466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.432 [2024-12-05 14:18:10.467688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.432 [2024-12-05 14:18:10.467704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.432 [2024-12-05 14:18:10.467710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.432 [2024-12-05 14:18:10.476365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.432 [2024-12-05 14:18:10.476382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.432 [2024-12-05 14:18:10.476388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.432 [2024-12-05 14:18:10.485357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.432 [2024-12-05 14:18:10.485374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.432 [2024-12-05 14:18:10.485380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.432 [2024-12-05 14:18:10.494886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.432 [2024-12-05 14:18:10.494903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.432 [2024-12-05 14:18:10.494914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.432 [2024-12-05 14:18:10.503739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.432 [2024-12-05 14:18:10.503755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.432 [2024-12-05 14:18:10.503762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.432 [2024-12-05 14:18:10.511878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.432 [2024-12-05 14:18:10.511895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.432 [2024-12-05 14:18:10.511902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.432 [2024-12-05 14:18:10.520739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.432 [2024-12-05 14:18:10.520756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.432 [2024-12-05 14:18:10.520763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.432 [2024-12-05 14:18:10.530913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.432 [2024-12-05 14:18:10.530930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.432 [2024-12-05 14:18:10.530938] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.432 [2024-12-05 14:18:10.538637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.432 [2024-12-05 14:18:10.538654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.432 [2024-12-05 14:18:10.538661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.432 [2024-12-05 14:18:10.547384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.432 [2024-12-05 14:18:10.547401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.432 [2024-12-05 14:18:10.547407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.432 [2024-12-05 14:18:10.556764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.432 [2024-12-05 14:18:10.556781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.432 [2024-12-05 14:18:10.556789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.432 [2024-12-05 14:18:10.566096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.432 [2024-12-05 14:18:10.566113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.432 [2024-12-05 14:18:10.566119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.432 [2024-12-05 14:18:10.574521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.432 [2024-12-05 14:18:10.574541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.432 [2024-12-05 14:18:10.574547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.432 [2024-12-05 14:18:10.584866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.432 [2024-12-05 14:18:10.584883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.432 [2024-12-05 14:18:10.584889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.432 [2024-12-05 14:18:10.597024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.432 [2024-12-05 14:18:10.597040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
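The throughput sample interleaved above ("27429.00 IOPS, 107.14 MiB/s") is self-consistent with a 4 KiB I/O size: 27429 * 4096 B is about 112.35 MB/s, which is 107.14 MiB/s. A one-line cross-check of such progress samples; the 4 KiB block size is inferred from the numbers, not stated anywhere in this log:

    #include <stdio.h>

    int main(void)
    {
        double iops = 27429.0;        /* from the progress line above */
        double block_bytes = 4096.0;  /* assumption: 4 KiB per I/O */
        double mib_s = iops * block_bytes / (1024.0 * 1024.0);
        printf("%.2f MiB/s\n", mib_s);  /* prints 107.14, matching the log */
        return 0;
    }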
00:28:04.432 [2024-12-05 14:18:10.597046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.432 [2024-12-05 14:18:10.606040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.432 [2024-12-05 14:18:10.606056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.432 [2024-12-05 14:18:10.606063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.432 [2024-12-05 14:18:10.614765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.432 [2024-12-05 14:18:10.614782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.433 [2024-12-05 14:18:10.614789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.433 [2024-12-05 14:18:10.624476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.433 [2024-12-05 14:18:10.624493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.433 [2024-12-05 14:18:10.624499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.433 [2024-12-05 14:18:10.632506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.433 [2024-12-05 14:18:10.632523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.433 [2024-12-05 14:18:10.632529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.433 [2024-12-05 14:18:10.641734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.433 [2024-12-05 14:18:10.641750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.433 [2024-12-05 14:18:10.641756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.433 [2024-12-05 14:18:10.651068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.433 [2024-12-05 14:18:10.651085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.433 [2024-12-05 14:18:10.651091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.433 [2024-12-05 14:18:10.659809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190) 00:28:04.433 [2024-12-05 14:18:10.659826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 
lba:6255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.433 [2024-12-05 14:18:10.659832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:04.433 [2024-12-05 14:18:10.668250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x16d1190)
00:28:04.433 [2024-12-05 14:18:10.668267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.433 [2024-12-05 14:18:10.668274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... roughly 70 further identical triples omitted: each injected data digest error on tqpair=(0x16d1190) completes a qid:1 READ (len:1) with COMMAND TRANSIENT TRANSPORT ERROR (00/22); timestamps run 14:18:10.678 through 14:18:11.320 ...]
00:28:05.219 27581.00 IOPS, 107.74 MiB/s
00:28:05.219 Latency(us)
[2024-12-05T13:18:11.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:05.219 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:05.219 nvme0n1 : 2.00 27599.01 107.81 0.00 0.00 4633.04 2157.23 13598.72
[2024-12-05T13:18:11.520Z] ===================================================================================================================
[2024-12-05T13:18:11.520Z] Total : 27599.01 107.81 0.00 0.00 4633.04 2157.23 13598.72
00:28:05.220 {
00:28:05.220 "results": [
00:28:05.220 {
00:28:05.220 "job": "nvme0n1",
00:28:05.220 "core_mask": "0x2",
00:28:05.220 "workload": "randread",
00:28:05.220 "status": "finished",
00:28:05.220 "queue_depth": 128,
00:28:05.220 "io_size": 4096,
00:28:05.220 "runtime": 2.003333,
00:28:05.220 "iops": 27599.00625607425,
00:28:05.220 "mibps": 107.80861818779005,
00:28:05.220 "io_failed": 0,
00:28:05.220 "io_timeout": 0,
00:28:05.220 "avg_latency_us": 4633.039877494423,
00:28:05.220 "min_latency_us": 2157.2266666666665,
00:28:05.220 "max_latency_us": 13598.72
00:28:05.220 }
00:28:05.220 ],
00:28:05.220 "core_count": 1
00:28:05.220 }
00:28:05.220 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
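[Editor's note: the get_transient_errcount trace above is how digest.sh turns the injected digest errors into a pass/fail number: it fetches per-bdev I/O statistics over the bperf RPC socket and extracts the NVMe transient-transport-error counter with jq. A minimal standalone sketch of the same query, assuming a bdevperf instance already listening on /var/tmp/bperf.sock and an attached bdev named nvme0n1 as in this run:

    #!/usr/bin/env bash
    # Sketch only: mirrors the get_transient_errcount trace above, not the
    # verbatim digest.sh helper. Paths and names are taken from this run.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock
    BDEV=nvme0n1

    # bdev_get_iostat reports driver-specific NVMe status-code counters
    # because the controller was created with --nvme-error-stat enabled.
    count=$("$SPDK/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b "$BDEV" \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

    # The test passes when at least one injected digest error surfaced as
    # a transient transport error; this run counted 216 of them.
    (( count > 0 )) && echo "transient transport errors: $count"
]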
00:28:05.220 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:05.481 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 216 > 0 ))
00:28:05.481 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2900006
00:28:05.481 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2900006 ']'
00:28:05.481 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2900006
00:28:05.481 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:05.481 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:05.481 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2900006
00:28:05.481 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:05.481 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:05.481 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2900006'
killing process with pid 2900006
00:28:05.481 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2900006
Received shutdown signal, test time was about 2.000000 seconds
00:28:05.481
00:28:05.481 Latency(us)
[2024-12-05T13:18:11.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-05T13:18:11.781Z] ===================================================================================================================
[2024-12-05T13:18:11.781Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:05.481 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2900006
00:28:05.481 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:05.481 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:05.481 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:05.481 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:05.481 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:05.481 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2901245
00:28:05.481 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2901245 /var/tmp/bperf.sock
00:28:05.481 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2901245 ']'
00:28:05.481 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
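[Editor's note: the run_bperf_err trace above, and the waitforlisten trace that follows, repeat digest.sh's bperf startup handshake: bdevperf is launched idle in wait-for-RPC mode (-z) on a private socket, and the test blocks until that socket answers before driving it. A rough sketch of the pattern; waitforlisten is autotest_common.sh's helper, and the polling loop below is a simplified stand-in for it, not its actual body:

    #!/usr/bin/env bash
    # Sketch: start bdevperf idle (-z) on a private RPC socket, as the
    # trace does for this 131072-byte randread error pass.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock

    "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Simplified stand-in for waitforlisten(): poll until the RPC socket
    # accepts a harmless request, then the test can configure the target.
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
]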
00:28:05.481 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:05.481 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:05.481 14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
14:18:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:05.481 [2024-12-05 14:18:11.762769] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization...
00:28:05.481 [2024-12-05 14:18:11.762824] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2901245 ]
00:28:05.481 I/O size of 131072 is greater than zero copy threshold (65536). Zero copy mechanism will not be used.
00:28:05.741 [2024-12-05 14:18:11.843985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:05.742 [2024-12-05 14:18:11.873803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:06.311 14:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:06.311 14:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:28:06.311 14:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:06.312 14:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:06.573 14:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:06.573 14:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:06.573 14:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:06.573 14:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:06.573 14:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:06.573 14:18:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:06.834 nvme0n1
00:28:06.834 14:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:06.834 14:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
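[Editor's note: before perform_tests is kicked off below, the trace above has armed the failure mode for this pass: NVMe status-code counters are enabled, the controller is attached with TCP data digest (--ddgst), and the accel crc32c stage is told to inject corruption with -i 32 (which appears to be an every-32nd-operation interval; that reading is an assumption). A condensed sketch of that RPC sequence, using the same calls and arguments as the trace:

    #!/usr/bin/env bash
    # Sketch of the digest-error setup seen in the trace above; socket,
    # address, and names are this run's values, not general defaults.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Count NVMe status codes per bdev and retry failed I/O indefinitely,
    # so transient transport errors are recorded instead of failing I/O.
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Start from a clean slate, then attach with TCP data digest enabled.
    $RPC accel_error_inject_error -o crc32c -t disable
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt injected crc32c results: each hit surfaces as a receive-side
    # data digest error and a COMMAND TRANSIENT TRANSPORT ERROR completion.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
]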
00:28:06.834 14:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:06.834 14:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:07.096 14:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
14:18:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:07.096 I/O size of 131072 is greater than zero copy threshold (65536). Zero copy mechanism will not be used.
00:28:07.096 Running I/O for 2 seconds...
00:28:07.096 [2024-12-05 14:18:13.153961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570)
00:28:07.096 [2024-12-05 14:18:13.153994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.096 [2024-12-05 14:18:13.154004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... roughly 50 further identical triples omitted: each injected data digest error on tqpair=(0x150c570) completes a qid:1 READ (len:32) with COMMAND TRANSIENT TRANSPORT ERROR (00/22); timestamps run 14:18:13.161 through 14:18:13.655, and the run continues below ...]
00:28:07.620 [2024-12-05 14:18:13.663758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570)
00:28:07.620 [2024-12-05 14:18:13.663776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:07.620 [2024-12-05 14:18:13.663783] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:07.620 [2024-12-05 14:18:13.675037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.620 [2024-12-05 14:18:13.675056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.620 [2024-12-05 14:18:13.675063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:07.620 [2024-12-05 14:18:13.685094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.620 [2024-12-05 14:18:13.685113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.620 [2024-12-05 14:18:13.685119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:07.620 [2024-12-05 14:18:13.696409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.620 [2024-12-05 14:18:13.696428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.620 [2024-12-05 14:18:13.696434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:07.620 [2024-12-05 14:18:13.707302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.620 [2024-12-05 14:18:13.707321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.620 [2024-12-05 14:18:13.707327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:07.620 [2024-12-05 14:18:13.717727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.620 [2024-12-05 14:18:13.717746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.620 [2024-12-05 14:18:13.717752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:07.620 [2024-12-05 14:18:13.728323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.620 [2024-12-05 14:18:13.728341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.620 [2024-12-05 14:18:13.728347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:07.620 [2024-12-05 14:18:13.738432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.620 [2024-12-05 14:18:13.738450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:07.620 [2024-12-05 14:18:13.738462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:07.620 [2024-12-05 14:18:13.747974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.620 [2024-12-05 14:18:13.747991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.620 [2024-12-05 14:18:13.747997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:07.620 [2024-12-05 14:18:13.754116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.620 [2024-12-05 14:18:13.754134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.620 [2024-12-05 14:18:13.754140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:07.620 [2024-12-05 14:18:13.764488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.620 [2024-12-05 14:18:13.764506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.620 [2024-12-05 14:18:13.764516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:07.620 [2024-12-05 14:18:13.775188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.620 [2024-12-05 14:18:13.775206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.620 [2024-12-05 14:18:13.775213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:07.620 [2024-12-05 14:18:13.785788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.620 [2024-12-05 14:18:13.785806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.620 [2024-12-05 14:18:13.785812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:07.620 [2024-12-05 14:18:13.797225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.620 [2024-12-05 14:18:13.797243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.620 [2024-12-05 14:18:13.797249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:07.620 [2024-12-05 14:18:13.809464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.621 [2024-12-05 14:18:13.809482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:256 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.621 [2024-12-05 14:18:13.809488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:07.621 [2024-12-05 14:18:13.821804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.621 [2024-12-05 14:18:13.821822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.621 [2024-12-05 14:18:13.821829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:07.621 [2024-12-05 14:18:13.834025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.621 [2024-12-05 14:18:13.834043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.621 [2024-12-05 14:18:13.834049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:07.621 [2024-12-05 14:18:13.845897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.621 [2024-12-05 14:18:13.845915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.621 [2024-12-05 14:18:13.845921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:07.621 [2024-12-05 14:18:13.858165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.621 [2024-12-05 14:18:13.858183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.621 [2024-12-05 14:18:13.858190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:07.621 [2024-12-05 14:18:13.870768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.621 [2024-12-05 14:18:13.870789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.621 [2024-12-05 14:18:13.870795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:07.621 [2024-12-05 14:18:13.883362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.621 [2024-12-05 14:18:13.883380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.621 [2024-12-05 14:18:13.883386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:07.621 [2024-12-05 14:18:13.895762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.621 [2024-12-05 14:18:13.895780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.621 [2024-12-05 14:18:13.895786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:07.621 [2024-12-05 14:18:13.908416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.621 [2024-12-05 14:18:13.908434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.621 [2024-12-05 14:18:13.908441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:07.882 [2024-12-05 14:18:13.920327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.882 [2024-12-05 14:18:13.920346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.882 [2024-12-05 14:18:13.920353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:07.882 [2024-12-05 14:18:13.932512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.882 [2024-12-05 14:18:13.932531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.882 [2024-12-05 14:18:13.932537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:07.882 [2024-12-05 14:18:13.944750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.882 [2024-12-05 14:18:13.944768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.882 [2024-12-05 14:18:13.944774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:07.882 [2024-12-05 14:18:13.955103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.882 [2024-12-05 14:18:13.955121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.882 [2024-12-05 14:18:13.955127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:07.882 [2024-12-05 14:18:13.967103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.882 [2024-12-05 14:18:13.967122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.882 [2024-12-05 14:18:13.967128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:07.882 [2024-12-05 14:18:13.978676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.882 [2024-12-05 14:18:13.978695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.882 [2024-12-05 14:18:13.978701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:07.882 [2024-12-05 14:18:13.990770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.882 [2024-12-05 14:18:13.990789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.882 [2024-12-05 14:18:13.990795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:07.882 [2024-12-05 14:18:14.001627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.882 [2024-12-05 14:18:14.001646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.882 [2024-12-05 14:18:14.001652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:07.882 [2024-12-05 14:18:14.013624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.882 [2024-12-05 14:18:14.013642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.882 [2024-12-05 14:18:14.013648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:07.882 [2024-12-05 14:18:14.025352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.882 [2024-12-05 14:18:14.025371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.882 [2024-12-05 14:18:14.025377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:07.882 [2024-12-05 14:18:14.035172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.883 [2024-12-05 14:18:14.035190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.883 [2024-12-05 14:18:14.035196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:07.883 [2024-12-05 14:18:14.047572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.883 [2024-12-05 14:18:14.047590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.883 [2024-12-05 14:18:14.047596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:07.883 [2024-12-05 14:18:14.058390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.883 
[2024-12-05 14:18:14.058408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.883 [2024-12-05 14:18:14.058414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:07.883 [2024-12-05 14:18:14.068573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.883 [2024-12-05 14:18:14.068591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.883 [2024-12-05 14:18:14.068600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:07.883 [2024-12-05 14:18:14.079335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.883 [2024-12-05 14:18:14.079353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.883 [2024-12-05 14:18:14.079359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:07.883 [2024-12-05 14:18:14.089637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.883 [2024-12-05 14:18:14.089655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.883 [2024-12-05 14:18:14.089661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:07.883 [2024-12-05 14:18:14.100391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.883 [2024-12-05 14:18:14.100408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.883 [2024-12-05 14:18:14.100415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:07.883 [2024-12-05 14:18:14.112382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.883 [2024-12-05 14:18:14.112400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.883 [2024-12-05 14:18:14.112406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:07.883 [2024-12-05 14:18:14.122849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.883 [2024-12-05 14:18:14.122867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.883 [2024-12-05 14:18:14.122873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:07.883 [2024-12-05 14:18:14.132011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x150c570) 00:28:07.883 [2024-12-05 14:18:14.132029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.883 [2024-12-05 14:18:14.132035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:07.883 [2024-12-05 14:18:14.142839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.883 [2024-12-05 14:18:14.142858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.883 [2024-12-05 14:18:14.142864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:07.883 2978.00 IOPS, 372.25 MiB/s [2024-12-05T13:18:14.183Z] [2024-12-05 14:18:14.154913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.883 [2024-12-05 14:18:14.154931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.883 [2024-12-05 14:18:14.154938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:07.883 [2024-12-05 14:18:14.166600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.883 [2024-12-05 14:18:14.166618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.883 [2024-12-05 14:18:14.166624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:07.883 [2024-12-05 14:18:14.176261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:07.883 [2024-12-05 14:18:14.176279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.883 [2024-12-05 14:18:14.176285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.183367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.183386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.183392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.191293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.191311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.191317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 
14:18:14.203073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.203091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.203098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.215247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.215265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.215271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.227657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.227675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.227681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.236088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.236106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.236112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.243753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.243770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.243780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.250879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.250897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.250903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.260894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.260912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.260918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.271910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.271929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.271935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.281831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.281850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.281856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.293857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.293875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.293882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.306473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.306490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.306497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.318300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.318319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.318325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.330856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.330874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.330880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.338409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.338431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.338437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.343525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.343542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.343549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.351913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.351932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.351938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.361435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.361458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.361464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.371107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.371126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.371132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.378570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.378588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.378594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.390564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.390582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.390588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.401141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.401160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.401166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.410697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.410715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.410722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.421524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.421543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.145 [2024-12-05 14:18:14.421549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:08.145 [2024-12-05 14:18:14.433123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.145 [2024-12-05 14:18:14.433142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.146 [2024-12-05 14:18:14.433148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:08.146 [2024-12-05 14:18:14.440351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.146 [2024-12-05 14:18:14.440369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.146 [2024-12-05 14:18:14.440375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:08.407 [2024-12-05 14:18:14.452238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.407 [2024-12-05 14:18:14.452256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.407 [2024-12-05 14:18:14.452262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:08.407 [2024-12-05 14:18:14.463697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.407 [2024-12-05 14:18:14.463715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.407 [2024-12-05 14:18:14.463721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:08.407 [2024-12-05 14:18:14.475618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.407 [2024-12-05 14:18:14.475636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.407 
[2024-12-05 14:18:14.475642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:08.407 [2024-12-05 14:18:14.486854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.407 [2024-12-05 14:18:14.486873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.407 [2024-12-05 14:18:14.486880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:08.407 [2024-12-05 14:18:14.498384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.407 [2024-12-05 14:18:14.498403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.407 [2024-12-05 14:18:14.498409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:08.407 [2024-12-05 14:18:14.510001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.407 [2024-12-05 14:18:14.510018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.407 [2024-12-05 14:18:14.510028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:08.407 [2024-12-05 14:18:14.519588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.407 [2024-12-05 14:18:14.519606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.407 [2024-12-05 14:18:14.519612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:08.407 [2024-12-05 14:18:14.530243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.407 [2024-12-05 14:18:14.530261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.407 [2024-12-05 14:18:14.530267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:08.407 [2024-12-05 14:18:14.540786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.407 [2024-12-05 14:18:14.540804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.407 [2024-12-05 14:18:14.540810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:08.407 [2024-12-05 14:18:14.550871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.407 [2024-12-05 14:18:14.550890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.407 [2024-12-05 14:18:14.550896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:08.407 [2024-12-05 14:18:14.561568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.407 [2024-12-05 14:18:14.561586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.407 [2024-12-05 14:18:14.561592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:08.407 [2024-12-05 14:18:14.569946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.407 [2024-12-05 14:18:14.569963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.407 [2024-12-05 14:18:14.569969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:08.407 [2024-12-05 14:18:14.579717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.407 [2024-12-05 14:18:14.579735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.407 [2024-12-05 14:18:14.579742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:08.407 [2024-12-05 14:18:14.588961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.407 [2024-12-05 14:18:14.588978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.407 [2024-12-05 14:18:14.588985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:08.407 [2024-12-05 14:18:14.600213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.407 [2024-12-05 14:18:14.600234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.407 [2024-12-05 14:18:14.600240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:08.407 [2024-12-05 14:18:14.610355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.407 [2024-12-05 14:18:14.610372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.407 [2024-12-05 14:18:14.610378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:08.407 [2024-12-05 14:18:14.620726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x150c570) 00:28:08.407 [2024-12-05 14:18:14.620744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:08.407 [2024-12-05 14:18:14.620750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:08.407-00:28:08.938 [... 14:18:14.632 through 14:18:15.149: ~50 further repetitions of the same three-record pattern on tqpair=(0x150c570) -- nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error, a READ command print (qid:1, cid 1-15, various lba, len:32), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:28:08.938 3007.00 IOPS, 375.88 MiB/s
00:28:08.938 Latency(us)
00:28:08.938 [2024-12-05T13:18:15.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:08.938 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:08.938 nvme0n1 : 2.00 3009.90 376.24 0.00 0.00 5313.50 535.89 12997.97
00:28:08.938 [2024-12-05T13:18:15.238Z] ===================================================================================================================
00:28:08.938 [2024-12-05T13:18:15.238Z] Total : 3009.90 376.24 0.00 0.00 5313.50 535.89 12997.97
00:28:08.938 {
00:28:08.938   "results": [
00:28:08.938     {
00:28:08.938       "job": "nvme0n1",
00:28:08.938       "core_mask": "0x2",
00:28:08.938       "workload": "randread",
00:28:08.938       "status": "finished",
00:28:08.938       "queue_depth": 16,
00:28:08.938       "io_size": 131072,
00:28:08.938       "runtime": 2.003391,
00:28:08.938       "iops": 3009.8967201110518,
00:28:08.938       "mibps": 376.23709001388147,
00:28:08.938       "io_failed": 0,
00:28:08.938       "io_timeout": 0,
00:28:08.938       "avg_latency_us": 5313.496092868989,
00:28:08.938       "min_latency_us": 535.8933333333333,
00:28:08.938       "max_latency_us": 12997.973333333333
00:28:08.938     }
00:28:08.938   ],
00:28:08.938   "core_count": 1
00:28:08.938 }
00:28:08.938 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:08.938 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:08.938 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:08.938 | .driver_specific
00:28:08.938 | .nvme_error
00:28:08.938 | .status_code
00:28:08.938 | .command_transient_transport_error'
00:28:08.938 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:09.198 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 195 > 0 ))
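A note on the records just traced: get_transient_errcount pulls the per-bdev NVMe error counters over the bperf RPC socket and filters out one number, which the (( 195 > 0 )) check above asserts is positive (195 transient transport errors for the randread pass; the summary is internally consistent, 3009.90 IOPS of 128 KiB I/O is 3009.90 x 0.125 MiB = 376.24 MiB/s). A minimal standalone equivalent, assuming an SPDK checkout at ./spdk and the same /var/tmp/bperf.sock socket as this run; the jq path is simply the one-line form of the filter in the trace:

    # Query bdevperf's iostat for nvme0n1 and extract the transient transport
    # error count (populated because bdev_nvme_set_options --nvme-error-stat is
    # passed when each bperf instance is set up, as seen later in this log).
    ./spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'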
00:28:09.198 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2901245
00:28:09.198 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2901245 ']'
00:28:09.198 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2901245
00:28:09.198 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:09.198 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:09.198 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2901245
00:28:09.198 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:09.198 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:09.198 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2901245'
00:28:09.198 killing process with pid 2901245
00:28:09.198 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2901245
00:28:09.198 Received shutdown signal, test time was about 2.000000 seconds
00:28:09.198
00:28:09.198 Latency(us)
00:28:09.198 [2024-12-05T13:18:15.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:09.198 [2024-12-05T13:18:15.498Z] ===================================================================================================================
00:28:09.198 [2024-12-05T13:18:15.498Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:09.198 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2901245
00:28:09.458 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:09.458 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:09.458 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:09.458 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:09.458 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:09.458 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2902038
00:28:09.458 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2902038 /var/tmp/bperf.sock
00:28:09.458 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2902038 ']'
00:28:09.458 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
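For readers skimming the trace: run_bperf_err relaunches bdevperf for each rw/bs/qd combination. The launch line above, re-annotated; the flag meanings are paraphrased from SPDK's bdevperf usage text and should be read as a summary, not as authoritative documentation:

    # Core mask 0x2 = run the reactor on core 1; -r names the JSON-RPC socket.
    # Workload randwrite, 4096-byte I/Os, 2-second runtime, queue depth 128.
    # -z keeps bdevperf idle until a perform_tests RPC arrives, which lets the
    # script arm the CRC-32C error injection before any I/O is issued.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z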
00:28:09.458 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:09.458 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:09.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:09.458 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:09.458 14:18:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:09.458 [2024-12-05 14:18:15.579777] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization...
00:28:09.458 [2024-12-05 14:18:15.579832] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902038 ]
00:28:09.458 [2024-12-05 14:18:15.665629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:09.458 [2024-12-05 14:18:15.694652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:10.401 14:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:10.401 14:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:28:10.401 14:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:10.401 14:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:10.401 14:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:10.401 14:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.401 14:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:10.401 14:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.401 14:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:10.401 14:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:10.663 nvme0n1
00:28:10.663 14:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:10.663 14:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:10.663 14:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:10.663 14:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:10.663 14:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
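The setup sequence just traced is the core of the digest-error test: enable per-status-code NVMe error counters, clear any leftover CRC-32C fault injection, attach the controller with TCP data digest enabled (--ddgst), then arm corruption of the accel framework's crc32c operations so that received data digests miscompare. Condensed into a standalone sketch; the commands are copied from the trace, while the rpc helper function and the ./spdk path are illustrative:

    rpc() { ./spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count errors per status code, retry without limit
    rpc accel_error_inject_error -o crc32c -t disable                   # clear stale crc32c injections
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # --ddgst: TCP data digest on
    rpc accel_error_inject_error -o crc32c -t corrupt -i 256            # arm crc32c corruption (-i 256 as used by digest.sh)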
00:28:10.663 14:18:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:10.926 Running I/O for 2 seconds...
00:28:10.926 [2024-12-05 14:18:17.042906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016eed920
00:28:10.926 [2024-12-05 14:18:17.043886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:10.926 [2024-12-05 14:18:17.043914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:28:10.926-00:28:11.455 [... 14:18:17.051 through 14:18:17.667: ~70 further repetitions of the same three-record pattern on tqpair=(0xcfc510) -- tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error with varying pdu=0x200016e..., a WRITE command print (qid:1, various cid/lba, len:1), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:28:11.455 [2024-12-05 14:18:17.675990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016edf118
00:28:11.455 [2024-12-05 14:18:17.677216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:11.455 [2024-12-05 14:18:17.677232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0
dnr:0 00:28:11.455 [2024-12-05 14:18:17.684441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef0350 00:28:11.455 [2024-12-05 14:18:17.685659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.455 [2024-12-05 14:18:17.685675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:11.455 [2024-12-05 14:18:17.692900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef31b8 00:28:11.455 [2024-12-05 14:18:17.694109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.455 [2024-12-05 14:18:17.694124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:11.455 [2024-12-05 14:18:17.699820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee4de8 00:28:11.455 [2024-12-05 14:18:17.700565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.455 [2024-12-05 14:18:17.700580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:11.455 [2024-12-05 14:18:17.708209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016eee5c8 00:28:11.455 [2024-12-05 14:18:17.708934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.455 [2024-12-05 14:18:17.708950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:11.455 [2024-12-05 14:18:17.716617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016eef6a8 00:28:11.455 [2024-12-05 14:18:17.717350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.455 [2024-12-05 14:18:17.717366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:11.455 [2024-12-05 14:18:17.725010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef0788 00:28:11.455 [2024-12-05 14:18:17.725745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.456 [2024-12-05 14:18:17.725762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:11.456 [2024-12-05 14:18:17.733443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee0ea0 00:28:11.456 [2024-12-05 14:18:17.734173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.456 [2024-12-05 14:18:17.734189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:003e p:0 m:0 dnr:0 00:28:11.456 [2024-12-05 14:18:17.741868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef92c0 00:28:11.456 [2024-12-05 14:18:17.742594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.456 [2024-12-05 14:18:17.742610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:11.456 [2024-12-05 14:18:17.750279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef81e0 00:28:11.717 [2024-12-05 14:18:17.751016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.717 [2024-12-05 14:18:17.751032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:11.717 [2024-12-05 14:18:17.758688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee4140 00:28:11.717 [2024-12-05 14:18:17.759398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.717 [2024-12-05 14:18:17.759413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:11.717 [2024-12-05 14:18:17.767096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef46d0 00:28:11.717 [2024-12-05 14:18:17.767825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.717 [2024-12-05 14:18:17.767841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:11.717 [2024-12-05 14:18:17.775518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee01f8 00:28:11.717 [2024-12-05 14:18:17.776258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.717 [2024-12-05 14:18:17.776273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:11.717 [2024-12-05 14:18:17.783947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016edf118 00:28:11.717 [2024-12-05 14:18:17.784681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.717 [2024-12-05 14:18:17.784696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:11.717 [2024-12-05 14:18:17.792377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016eddc00 00:28:11.717 [2024-12-05 14:18:17.793128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.717 [2024-12-05 14:18:17.793144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:11.717 [2024-12-05 14:18:17.800783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef9b30 00:28:11.717 [2024-12-05 14:18:17.801526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.717 [2024-12-05 14:18:17.801542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:11.717 [2024-12-05 14:18:17.809197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef7538 00:28:11.717 [2024-12-05 14:18:17.809892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.717 [2024-12-05 14:18:17.809907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:11.717 [2024-12-05 14:18:17.817607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef6458 00:28:11.717 [2024-12-05 14:18:17.818341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.717 [2024-12-05 14:18:17.818357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:11.717 [2024-12-05 14:18:17.826031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee88f8 00:28:11.717 [2024-12-05 14:18:17.826739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.717 [2024-12-05 14:18:17.826754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:11.717 [2024-12-05 14:18:17.834471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef57b0 00:28:11.717 [2024-12-05 14:18:17.835198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.717 [2024-12-05 14:18:17.835213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:11.717 [2024-12-05 14:18:17.842876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016eed4e8 00:28:11.717 [2024-12-05 14:18:17.843612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.717 [2024-12-05 14:18:17.843627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:11.717 [2024-12-05 14:18:17.851280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016eef270 00:28:11.717 [2024-12-05 14:18:17.852016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.717 [2024-12-05 14:18:17.852032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:11.717 [2024-12-05 14:18:17.859689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef0350 00:28:11.717 [2024-12-05 14:18:17.860419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.717 [2024-12-05 14:18:17.860434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:11.717 [2024-12-05 14:18:17.868098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef1430 00:28:11.717 [2024-12-05 14:18:17.868845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.717 [2024-12-05 14:18:17.868861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:11.717 [2024-12-05 14:18:17.876527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee12d8 00:28:11.717 [2024-12-05 14:18:17.877271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.717 [2024-12-05 14:18:17.877286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:11.718 [2024-12-05 14:18:17.884366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016eea680 00:28:11.718 [2024-12-05 14:18:17.885099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.718 [2024-12-05 14:18:17.885115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:11.718 [2024-12-05 14:18:17.893612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee73e0 00:28:11.718 [2024-12-05 14:18:17.894451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.718 [2024-12-05 14:18:17.894471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:11.718 [2024-12-05 14:18:17.902030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016efb8b8 00:28:11.718 [2024-12-05 14:18:17.902886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.718 [2024-12-05 14:18:17.902901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:11.718 [2024-12-05 14:18:17.910591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee6738 00:28:11.718 [2024-12-05 14:18:17.911445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.718 [2024-12-05 14:18:17.911464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:11.718 [2024-12-05 14:18:17.919006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee5658 00:28:11.718 [2024-12-05 14:18:17.919860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.718 [2024-12-05 14:18:17.919876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:11.718 [2024-12-05 14:18:17.927428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016eeee38 00:28:11.718 [2024-12-05 14:18:17.928286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.718 [2024-12-05 14:18:17.928303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:11.718 [2024-12-05 14:18:17.935871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016eed920 00:28:11.718 [2024-12-05 14:18:17.936722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.718 [2024-12-05 14:18:17.936738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:11.718 [2024-12-05 14:18:17.944260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016efd208 00:28:11.718 [2024-12-05 14:18:17.945132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.718 [2024-12-05 14:18:17.945148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:11.718 [2024-12-05 14:18:17.952656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016efe2e8 00:28:11.718 [2024-12-05 14:18:17.953523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.718 [2024-12-05 14:18:17.953540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:11.718 [2024-12-05 14:18:17.961071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016efeb58 00:28:11.718 [2024-12-05 14:18:17.961924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.718 [2024-12-05 14:18:17.961940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:11.718 [2024-12-05 14:18:17.969482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016efac10 00:28:11.718 [2024-12-05 14:18:17.970337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.718 [2024-12-05 14:18:17.970353] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:11.718 [2024-12-05 14:18:17.977955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016efc128 00:28:11.718 [2024-12-05 14:18:17.978811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.718 [2024-12-05 14:18:17.978827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:11.718 [2024-12-05 14:18:17.986375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee3060 00:28:11.718 [2024-12-05 14:18:17.987263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.718 [2024-12-05 14:18:17.987279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:11.718 [2024-12-05 14:18:17.994804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee73e0 00:28:11.718 [2024-12-05 14:18:17.995666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.718 [2024-12-05 14:18:17.995682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:11.718 [2024-12-05 14:18:18.003215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016eec408 00:28:11.718 [2024-12-05 14:18:18.004091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.718 [2024-12-05 14:18:18.004106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:11.718 [2024-12-05 14:18:18.011635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef6890 00:28:11.718 [2024-12-05 14:18:18.012482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.718 [2024-12-05 14:18:18.012499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.020060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee8d30 00:28:11.980 [2024-12-05 14:18:18.020911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.020928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.028476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef5378 00:28:11.980 [2024-12-05 14:18:18.029537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.029553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:11.980 30037.00 IOPS, 117.33 MiB/s [2024-12-05T13:18:18.280Z] [2024-12-05 14:18:18.036951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee6b70 00:28:11.980 [2024-12-05 14:18:18.037799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.037815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.045358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef2510 00:28:11.980 [2024-12-05 14:18:18.046212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.046228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.053806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016efc560 00:28:11.980 [2024-12-05 14:18:18.054665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.054681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.062220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee1f80 00:28:11.980 [2024-12-05 14:18:18.063068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.063084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.070639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ede470 00:28:11.980 [2024-12-05 14:18:18.071509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.071525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.079041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef3e60 00:28:11.980 [2024-12-05 14:18:18.079906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.079922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.087449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016eecc78 00:28:11.980 [2024-12-05 14:18:18.088320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22592 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.088335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.095866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef6020 00:28:11.980 [2024-12-05 14:18:18.096720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.096735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.104282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee4de8 00:28:11.980 [2024-12-05 14:18:18.105138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.105154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.112714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee6300 00:28:11.980 [2024-12-05 14:18:18.113566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.113584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.121107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016eef270 00:28:11.980 [2024-12-05 14:18:18.121961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.121978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.129514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016efcdd0 00:28:11.980 [2024-12-05 14:18:18.130363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.130379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.137924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016efeb58 00:28:11.980 [2024-12-05 14:18:18.138759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:96 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.138775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.146352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016efc128 00:28:11.980 [2024-12-05 14:18:18.147198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5415 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.147214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.154762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee73e0 00:28:11.980 [2024-12-05 14:18:18.155610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.155627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.163171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef6890 00:28:11.980 [2024-12-05 14:18:18.164029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.164045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.171579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef5378 00:28:11.980 [2024-12-05 14:18:18.172395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.172411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.179986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee6b70 00:28:11.980 [2024-12-05 14:18:18.180853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.180869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.188421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef2510 00:28:11.980 [2024-12-05 14:18:18.189296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.189313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.196842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016efc560 00:28:11.980 [2024-12-05 14:18:18.197668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.197685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.205292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee1f80 00:28:11.980 [2024-12-05 14:18:18.206107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8531 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.206123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.213729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ede470 00:28:11.980 [2024-12-05 14:18:18.214562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.214578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.222419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef0ff8 00:28:11.980 [2024-12-05 14:18:18.223380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.223396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.230837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016eddc00 00:28:11.980 [2024-12-05 14:18:18.231827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.231842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.239266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016eee5c8 00:28:11.980 [2024-12-05 14:18:18.240229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.240245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.247866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef1ca0 00:28:11.980 [2024-12-05 14:18:18.248840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.248855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.256286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef9f68 00:28:11.980 [2024-12-05 14:18:18.257251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.257268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.264709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016edece0 00:28:11.980 [2024-12-05 14:18:18.265685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 
nsid:1 lba:24484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.265700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:11.980 [2024-12-05 14:18:18.273137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016eefae0 00:28:11.980 [2024-12-05 14:18:18.274112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.980 [2024-12-05 14:18:18.274130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.281577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee4140 00:28:12.241 [2024-12-05 14:18:18.282552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.282569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.290018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef46d0 00:28:12.241 [2024-12-05 14:18:18.291004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.291021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.298476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee01f8 00:28:12.241 [2024-12-05 14:18:18.299398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.299414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.307172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016efc128 00:28:12.241 [2024-12-05 14:18:18.308253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.308270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.315747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee5ec8 00:28:12.241 [2024-12-05 14:18:18.316846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.316862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.324176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee8088 00:28:12.241 [2024-12-05 14:18:18.325269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:107 nsid:1 lba:10454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.325284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.332614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016efa7d8 00:28:12.241 [2024-12-05 14:18:18.333733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.333752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.341059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef0350 00:28:12.241 [2024-12-05 14:18:18.342129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.342145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.348911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee23b8 00:28:12.241 [2024-12-05 14:18:18.349997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.350013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.356882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016eeb760 00:28:12.241 [2024-12-05 14:18:18.357624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.357640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.365226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef1ca0 00:28:12.241 [2024-12-05 14:18:18.365923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.365939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.373681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee7c50 00:28:12.241 [2024-12-05 14:18:18.374409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.374425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.382115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef31b8 00:28:12.241 [2024-12-05 14:18:18.382842] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.382858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.390540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee4de8 00:28:12.241 [2024-12-05 14:18:18.391253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.391269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.398953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef35f0 00:28:12.241 [2024-12-05 14:18:18.399681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.399697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.407361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee49b0 00:28:12.241 [2024-12-05 14:18:18.408095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.408112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.415806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef4f40 00:28:12.241 [2024-12-05 14:18:18.416525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.416541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.424240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016eee5c8 00:28:12.241 [2024-12-05 14:18:18.424947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.424963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.432682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef0350 00:28:12.241 [2024-12-05 14:18:18.433403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.433418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.441090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016eedd58 00:28:12.241 [2024-12-05 14:18:18.441835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.441851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.449521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee38d0 00:28:12.241 [2024-12-05 14:18:18.450236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.450251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.457361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee5220 00:28:12.241 [2024-12-05 14:18:18.458042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.458058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.466661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ede038 00:28:12.241 [2024-12-05 14:18:18.467495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.467510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.475243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016efb8b8 00:28:12.241 [2024-12-05 14:18:18.476090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.476106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.483665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef3a28 00:28:12.241 [2024-12-05 14:18:18.484505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.484522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.492075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ee2c28 00:28:12.241 [2024-12-05 14:18:18.492916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:12.241 [2024-12-05 14:18:18.492933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:12.241 [2024-12-05 14:18:18.500497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016eed0b0 00:28:12.241 [2024-12-05 14:18:18.501358] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.241 [2024-12-05 14:18:18.501375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:28:12.241 [2024-12-05 14:18:18.508956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef6cc8
00:28:12.242 [2024-12-05 14:18:18.509804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.242 [2024-12-05 14:18:18.509820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0
[... the same data_crc32_calc_done *ERROR* / WRITE command print / COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion triplet repeats for dozens more 4 KiB WRITEs on tqpair=(0xcfc510) between 14:18:18.517372 and 14:18:19.016350; only the timestamps and the cid, lba, pdu, and sqhd fields vary ...]
00:28:12.767 [2024-12-05 14:18:19.023819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016ef1ca0
00:28:12.767 [2024-12-05 14:18:19.024775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.767 [2024-12-05 14:18:19.024790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:28:12.767 30166.50 IOPS, 117.84 MiB/s [2024-12-05T13:18:19.067Z]
00:28:12.767 [2024-12-05 14:18:19.032224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc510) with pdu=0x200016edf550
00:28:12.767 [2024-12-05 14:18:19.033158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:12.767 [2024-12-05 14:18:19.033172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:28:12.767
00:28:12.767 Latency(us)
00:28:12.767 [2024-12-05T13:18:19.067Z] Device Information                                                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:12.767 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:12.767 nvme0n1                                                                : 2.00         30182.49   117.90     0.00       0.00    4235.71    2280.11    14199.47
00:28:12.767 [2024-12-05T13:18:19.067Z] ===================================================================================================================
00:28:12.767 [2024-12-05T13:18:19.067Z] Total                                                                  :              30182.49   117.90     0.00       0.00    4235.71    2280.11    14199.47
00:28:12.767 {
00:28:12.767   "results": [
00:28:12.767     {
00:28:12.767       "job": "nvme0n1",
00:28:12.767       "core_mask": "0x2",
00:28:12.767       "workload": "randwrite",
00:28:12.767       "status": "finished",
00:28:12.767       "queue_depth": 128,
00:28:12.767       "io_size": 4096,
00:28:12.767       "runtime": 2.003181,
00:28:12.767       "iops": 30182.49474211267,
00:28:12.767       "mibps": 117.90037008637762,
00:28:12.767       "io_failed": 0,
00:28:12.767       "io_timeout": 0,
00:28:12.767       "avg_latency_us": 4235.705885557081,
00:28:12.767       "min_latency_us": 2280.1066666666666,
00:28:12.767       "max_latency_us": 14199.466666666667
00:28:12.767     }
00:28:12.767   ],
00:28:12.767   "core_count": 1
00:28:12.767 }
00:28:12.767 14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:12.767 | .driver_specific
00:28:12.767 | .nvme_error
00:28:12.767 | .status_code
00:28:12.767 | .command_transient_transport_error'
14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:13.027 14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 237 > 0 ))
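The (( 237 > 0 )) check above is the actual pass/fail assertion for this phase: with bdev_nvme_set_options --nvme-error-stat in effect, injected digest failures are tallied per NVMe status code, and get_transient_errcount reads the COMMAND TRANSIENT TRANSPORT ERROR counter back out of bdev_get_iostat. A minimal sketch of the helper, reconstructed from the xtrace above rather than copied from the suite (the real definition lives in host/digest.sh, and bperf_rpc is the suite's wrapper around scripts/rpc.py -s /var/tmp/bperf.sock):

```bash
# Sketch reconstructed from the xtrace above; not the verbatim host/digest.sh body.
get_transient_errcount() {
	local bdev=$1
	# --nvme-error-stat makes bdev_get_iostat expose per-status-code NVMe
	# error counters under .driver_specific.nvme_error.
	bperf_rpc bdev_get_iostat -b "$bdev" \
		| jq -r '.bdevs[0]
			| .driver_specific
			| .nvme_error
			| .status_code
			| .command_transient_transport_error'
}
```

Here it returned 237, so the assertion passes. Note how this squares with the JSON summary above: io_failed stays 0 even though 237 completions came back with status 00/22, because --bdev-retry-count -1 has the driver retry transient transport errors instead of failing the I/O.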
00:28:13.027 14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2902038
14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2902038 ']'
14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2902038
14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:13.027 14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2902038
00:28:13.027 14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:13.027 14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2902038'
killing process with pid 2902038
14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2902038
00:28:13.027 Received shutdown signal, test time was about 2.000000 seconds
00:28:13.027
00:28:13.027 Latency(us)
[2024-12-05T13:18:19.327Z] Device Information                                                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-05T13:18:19.327Z] ===================================================================================================================
[2024-12-05T13:18:19.327Z] Total                                                                  :                  0.00       0.00     0.00       0.00       0.00       0.00       0.00
00:28:13.027 14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2902038
00:28:13.287 14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2902726
14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2902726 /var/tmp/bperf.sock
14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2902726 ']'
14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
14:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:13.287 [2024-12-05 14:18:19.453503] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization...
00:28:13.287 [2024-12-05 14:18:19.453558] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902726 ]
00:28:13.287 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:13.287 Zero copy mechanism will not be used.
00:28:13.287 [2024-12-05 14:18:19.535292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:13.287 [2024-12-05 14:18:19.564975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:14.227 14:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
14:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
14:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
14:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:14.228 14:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
14:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
14:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:14.228 14:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
14:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
14:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:14.799 nvme0n1
00:28:14.799 14:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
14:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
14:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:14.799 14:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
14:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
14:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:14.799 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:14.799 Zero copy mechanism will not be used.
00:28:14.799 Running I/O for 2 seconds...
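With the first bperf instance killed, run_bperf_err randwrite 131072 16 repeats the experiment at a 128 KiB block size and queue depth 16, and the xtrace above is the whole error-injection recipe for it. Condensed into a plain script for readability (socket path, target address, and NQN exactly as used in this run; rpc_cmd and bperf_rpc are the suite's wrappers around scripts/rpc.py, so this is a sketch of the sequence, not the suite itself):

```bash
#!/usr/bin/env bash
# Condensed from the xtrace above; this run's RPC socket and target details.
rpc() {
	/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
		-s /var/tmp/bperf.sock "$@"
}

# Tally NVMe errors per status code and retry failed I/O indefinitely, so
# injected digest errors are counted in iostat instead of failing the job.
rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach with TCP data digest (--ddgst) enabled while crc32c injection is
# disabled, so the connect itself completes cleanly...
rpc accel_error_inject_error -o crc32c -t disable
rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
	-f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# ...then switch crc32c injection to corrupt (arguments as in the trace), so
# the data digest sent with each WRITE stops matching its payload and the
# command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22).
rpc accel_error_inject_error -o crc32c -t corrupt -i 32
```

The three-line pattern that fills the next two seconds of log is the visible effect of that last call: a data_crc32_calc_done digest error for the corrupted PDU, the WRITE that carried it, and its transient-transport-error completion.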
00:28:14.799 [2024-12-05 14:18:20.947623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8
00:28:14.799 [2024-12-05 14:18:20.947931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.799 [2024-12-05 14:18:20.947956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:14.799 [2024-12-05 14:18:20.954930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8
00:28:14.799 [2024-12-05 14:18:20.955226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:14.799 [2024-12-05 14:18:20.955247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same data_crc32_calc_done *ERROR* / WRITE command print / COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion triplet repeats for the 128 KiB WRITEs between 14:18:20.963420 and 14:18:21.401926, all on tqpair=(0xcfc850) with pdu=0x200016eff3c8; only the timestamps and the cid, lba, and sqhd fields vary ...]
00:28:15.321 [2024-12-05 14:18:21.413610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8
00:28:15.321 [2024-12-05 14:18:21.414081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:28:15.321 [2024-12-05 14:18:21.414098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.321 [2024-12-05 14:18:21.425184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.321 [2024-12-05 14:18:21.425424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.321 [2024-12-05 14:18:21.425447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.321 [2024-12-05 14:18:21.436559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.321 [2024-12-05 14:18:21.436865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.321 [2024-12-05 14:18:21.436882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.321 [2024-12-05 14:18:21.444482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.321 [2024-12-05 14:18:21.444852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.321 [2024-12-05 14:18:21.444869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.321 [2024-12-05 14:18:21.451678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.321 [2024-12-05 14:18:21.451855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.321 [2024-12-05 14:18:21.451871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.321 [2024-12-05 14:18:21.459933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.321 [2024-12-05 14:18:21.460203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.321 [2024-12-05 14:18:21.460220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.321 [2024-12-05 14:18:21.466276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.321 [2024-12-05 14:18:21.466452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.321 [2024-12-05 14:18:21.466477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.321 [2024-12-05 14:18:21.474513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.321 [2024-12-05 14:18:21.474690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.321 [2024-12-05 14:18:21.474706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.321 [2024-12-05 14:18:21.482649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.321 [2024-12-05 14:18:21.482826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.321 [2024-12-05 14:18:21.482842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.321 [2024-12-05 14:18:21.490866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.321 [2024-12-05 14:18:21.491185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.321 [2024-12-05 14:18:21.491202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.321 [2024-12-05 14:18:21.496685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.321 [2024-12-05 14:18:21.496861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.321 [2024-12-05 14:18:21.496877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.321 [2024-12-05 14:18:21.502585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.321 [2024-12-05 14:18:21.502755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.322 [2024-12-05 14:18:21.502771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.322 [2024-12-05 14:18:21.509268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.322 [2024-12-05 14:18:21.509588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.322 [2024-12-05 14:18:21.509605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.322 [2024-12-05 14:18:21.518462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.322 [2024-12-05 14:18:21.518771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.322 [2024-12-05 14:18:21.518788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.322 [2024-12-05 14:18:21.526470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.322 [2024-12-05 14:18:21.526696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.322 [2024-12-05 14:18:21.526713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.322 [2024-12-05 14:18:21.534032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.322 [2024-12-05 14:18:21.534204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.322 [2024-12-05 14:18:21.534220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.322 [2024-12-05 14:18:21.540122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.322 [2024-12-05 14:18:21.540166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.322 [2024-12-05 14:18:21.540181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.322 [2024-12-05 14:18:21.547953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.322 [2024-12-05 14:18:21.548013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.322 [2024-12-05 14:18:21.548028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.322 [2024-12-05 14:18:21.557193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.322 [2024-12-05 14:18:21.557246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.322 [2024-12-05 14:18:21.557261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.322 [2024-12-05 14:18:21.563107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.322 [2024-12-05 14:18:21.563151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.322 [2024-12-05 14:18:21.563166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.322 [2024-12-05 14:18:21.571257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.322 [2024-12-05 14:18:21.571477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.322 [2024-12-05 14:18:21.571492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.322 [2024-12-05 14:18:21.581554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.322 [2024-12-05 14:18:21.581634] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.322 [2024-12-05 14:18:21.581650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.322 [2024-12-05 14:18:21.589340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.322 [2024-12-05 14:18:21.589482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.322 [2024-12-05 14:18:21.589497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.322 [2024-12-05 14:18:21.598575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.322 [2024-12-05 14:18:21.598874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.322 [2024-12-05 14:18:21.598890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.322 [2024-12-05 14:18:21.607609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.322 [2024-12-05 14:18:21.607690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.322 [2024-12-05 14:18:21.607705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.322 [2024-12-05 14:18:21.615636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.322 [2024-12-05 14:18:21.615726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.322 [2024-12-05 14:18:21.615741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.581 [2024-12-05 14:18:21.625611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.581 [2024-12-05 14:18:21.625727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.581 [2024-12-05 14:18:21.625743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.581 [2024-12-05 14:18:21.634244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.581 [2024-12-05 14:18:21.634537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.581 [2024-12-05 14:18:21.634553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.581 [2024-12-05 14:18:21.642603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.581 [2024-12-05 14:18:21.642856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.581 [2024-12-05 14:18:21.642872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.581 [2024-12-05 14:18:21.646957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.581 [2024-12-05 14:18:21.647014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.581 [2024-12-05 14:18:21.647031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.581 [2024-12-05 14:18:21.654612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.581 [2024-12-05 14:18:21.654677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.581 [2024-12-05 14:18:21.654693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.581 [2024-12-05 14:18:21.660854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.581 [2024-12-05 14:18:21.660905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.581 [2024-12-05 14:18:21.660921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.581 [2024-12-05 14:18:21.670680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.581 [2024-12-05 14:18:21.670751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.581 [2024-12-05 14:18:21.670769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.581 [2024-12-05 14:18:21.678148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.581 [2024-12-05 14:18:21.678488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.581 [2024-12-05 14:18:21.678505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.581 [2024-12-05 14:18:21.687040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.582 [2024-12-05 14:18:21.687092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.687107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.582 [2024-12-05 14:18:21.694589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.582 [2024-12-05 
14:18:21.694634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.694649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.582 [2024-12-05 14:18:21.704053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.582 [2024-12-05 14:18:21.704338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.704353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.582 [2024-12-05 14:18:21.710935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.582 [2024-12-05 14:18:21.710979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.710996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.582 [2024-12-05 14:18:21.716244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.582 [2024-12-05 14:18:21.716289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.716304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.582 [2024-12-05 14:18:21.724869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.582 [2024-12-05 14:18:21.724933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.724948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.582 [2024-12-05 14:18:21.733186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.582 [2024-12-05 14:18:21.733368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.733384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.582 [2024-12-05 14:18:21.742556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.582 [2024-12-05 14:18:21.742623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.742639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.582 [2024-12-05 14:18:21.750943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 
00:28:15.582 [2024-12-05 14:18:21.750996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.751011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.582 [2024-12-05 14:18:21.758212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.582 [2024-12-05 14:18:21.758297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.758312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.582 [2024-12-05 14:18:21.765808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.582 [2024-12-05 14:18:21.765876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.765892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.582 [2024-12-05 14:18:21.771898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.582 [2024-12-05 14:18:21.771949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.771965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.582 [2024-12-05 14:18:21.778196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.582 [2024-12-05 14:18:21.778242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.778258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.582 [2024-12-05 14:18:21.784727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.582 [2024-12-05 14:18:21.784770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.784785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.582 [2024-12-05 14:18:21.791260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.582 [2024-12-05 14:18:21.791312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.791327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.582 [2024-12-05 14:18:21.798066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) 
with pdu=0x200016eff3c8 00:28:15.582 [2024-12-05 14:18:21.798362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.798378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.582 [2024-12-05 14:18:21.806497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.582 [2024-12-05 14:18:21.806555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.806570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.582 [2024-12-05 14:18:21.813991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.582 [2024-12-05 14:18:21.814235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.814251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.582 [2024-12-05 14:18:21.822487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.582 [2024-12-05 14:18:21.822737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.822753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.582 [2024-12-05 14:18:21.829002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.582 [2024-12-05 14:18:21.829073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.829088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.582 [2024-12-05 14:18:21.836539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.582 [2024-12-05 14:18:21.836613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.836628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.582 [2024-12-05 14:18:21.845923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.582 [2024-12-05 14:18:21.845987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.846002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.582 [2024-12-05 14:18:21.855251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.582 [2024-12-05 14:18:21.855556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.855572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.582 [2024-12-05 14:18:21.864103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.582 [2024-12-05 14:18:21.864159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.864174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.582 [2024-12-05 14:18:21.874095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.582 [2024-12-05 14:18:21.874146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.582 [2024-12-05 14:18:21.874164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.842 [2024-12-05 14:18:21.883051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.842 [2024-12-05 14:18:21.883117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-12-05 14:18:21.883132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.842 [2024-12-05 14:18:21.892289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.842 [2024-12-05 14:18:21.892344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-12-05 14:18:21.892360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.842 [2024-12-05 14:18:21.900982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.842 [2024-12-05 14:18:21.901103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-12-05 14:18:21.901118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.842 [2024-12-05 14:18:21.910137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.842 [2024-12-05 14:18:21.910209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-12-05 14:18:21.910224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.842 [2024-12-05 14:18:21.918812] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.842 [2024-12-05 14:18:21.918868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-12-05 14:18:21.918883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.842 [2024-12-05 14:18:21.925470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.842 [2024-12-05 14:18:21.925754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-12-05 14:18:21.925771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.842 [2024-12-05 14:18:21.933575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.842 [2024-12-05 14:18:21.933747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-12-05 14:18:21.933763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.842 [2024-12-05 14:18:21.942322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.842 [2024-12-05 14:18:21.942383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-12-05 14:18:21.942398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.842 3552.00 IOPS, 444.00 MiB/s [2024-12-05T13:18:22.142Z] [2024-12-05 14:18:21.951552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.842 [2024-12-05 14:18:21.951725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-12-05 14:18:21.951741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.842 [2024-12-05 14:18:21.960751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.842 [2024-12-05 14:18:21.960924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-12-05 14:18:21.960939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.842 [2024-12-05 14:18:21.965739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.842 [2024-12-05 14:18:21.965795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-12-05 14:18:21.965810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:28:15.842 [2024-12-05 14:18:21.970590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.842 [2024-12-05 14:18:21.970643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-12-05 14:18:21.970658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.842 [2024-12-05 14:18:21.976640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.842 [2024-12-05 14:18:21.976934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-12-05 14:18:21.976952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.842 [2024-12-05 14:18:21.985924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.842 [2024-12-05 14:18:21.986326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-12-05 14:18:21.986342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.842 [2024-12-05 14:18:21.994528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.842 [2024-12-05 14:18:21.994612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-12-05 14:18:21.994627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.842 [2024-12-05 14:18:22.002389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.842 [2024-12-05 14:18:22.002445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-12-05 14:18:22.002467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.842 [2024-12-05 14:18:22.010994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.842 [2024-12-05 14:18:22.011038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-12-05 14:18:22.011053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.842 [2024-12-05 14:18:22.019106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.842 [2024-12-05 14:18:22.019159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-12-05 14:18:22.019175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:28:15.842 [2024-12-05 14:18:22.026188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.842 [2024-12-05 14:18:22.026442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-12-05 14:18:22.026462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.842 [2024-12-05 14:18:22.033406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.842 [2024-12-05 14:18:22.033451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-12-05 14:18:22.033471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.842 [2024-12-05 14:18:22.039439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.842 [2024-12-05 14:18:22.039510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-12-05 14:18:22.039525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.842 [2024-12-05 14:18:22.045849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.842 [2024-12-05 14:18:22.045900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-12-05 14:18:22.045916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.842 [2024-12-05 14:18:22.052833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.842 [2024-12-05 14:18:22.052885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-12-05 14:18:22.052900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.842 [2024-12-05 14:18:22.057390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.842 [2024-12-05 14:18:22.057453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.842 [2024-12-05 14:18:22.057481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.842 [2024-12-05 14:18:22.063915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.843 [2024-12-05 14:18:22.063994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.843 [2024-12-05 14:18:22.064009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.843 [2024-12-05 14:18:22.071715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.843 [2024-12-05 14:18:22.071792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.843 [2024-12-05 14:18:22.071810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.843 [2024-12-05 14:18:22.080598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.843 [2024-12-05 14:18:22.080781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.843 [2024-12-05 14:18:22.080796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.843 [2024-12-05 14:18:22.089093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.843 [2024-12-05 14:18:22.089169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.843 [2024-12-05 14:18:22.089184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.843 [2024-12-05 14:18:22.098079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.843 [2024-12-05 14:18:22.098203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.843 [2024-12-05 14:18:22.098218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:15.843 [2024-12-05 14:18:22.109155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.843 [2024-12-05 14:18:22.109398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.843 [2024-12-05 14:18:22.109413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:15.843 [2024-12-05 14:18:22.119973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.843 [2024-12-05 14:18:22.120037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.843 [2024-12-05 14:18:22.120052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:15.843 [2024-12-05 14:18:22.129277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.843 [2024-12-05 14:18:22.129325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.843 [2024-12-05 14:18:22.129340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:15.843 [2024-12-05 14:18:22.133831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:15.843 [2024-12-05 14:18:22.133889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:15.843 [2024-12-05 14:18:22.133904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.103 [2024-12-05 14:18:22.141535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:16.103 [2024-12-05 14:18:22.141597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.103 [2024-12-05 14:18:22.141612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.103 [2024-12-05 14:18:22.149070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:16.103 [2024-12-05 14:18:22.149351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.103 [2024-12-05 14:18:22.149366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:16.103 [2024-12-05 14:18:22.158681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:16.103 [2024-12-05 14:18:22.158967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.103 [2024-12-05 14:18:22.158983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:16.103 [2024-12-05 14:18:22.167144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:16.103 [2024-12-05 14:18:22.167445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.103 [2024-12-05 14:18:22.167465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:16.103 [2024-12-05 14:18:22.172512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:16.104 [2024-12-05 14:18:22.172556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.104 [2024-12-05 14:18:22.172572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:16.104 [2024-12-05 14:18:22.178497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8 00:28:16.104 [2024-12-05 14:18:22.178548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.104 [2024-12-05 14:18:22.178563] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:16.104 [2024-12-05 14:18:22.186614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8
00:28:16.104 [2024-12-05 14:18:22.186680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.104 [2024-12-05 14:18:22.186694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the same three-line sequence (data digest error at tcp.c:2241, WRITE command print at nvme_qpair.c:243, COMMAND TRANSIENT TRANSPORT ERROR completion at nvme_qpair.c:474) repeats for several dozen more WRITEs on qid:1 with varying LBAs, from 14:18:22.193345 through 14:18:22.941289 ...]
3515.50 IOPS, 439.44 MiB/s [2024-12-05T13:18:23.205Z]
00:28:16.905 [2024-12-05 14:18:22.952344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xcfc850) with pdu=0x200016eff3c8
00:28:16.905 [2024-12-05 14:18:22.952646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.905 [2024-12-05 14:18:22.952660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:16.905
00:28:16.905 Latency(us)
[2024-12-05T13:18:23.205Z] Device Information : runtime(s)  IOPS     MiB/s   Fail/s  TO/s  Average  min      max
00:28:16.905 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:16.905 nvme0n1            : 2.01        3512.53  439.07  0.00    0.00  4546.32  1713.49  12397.23
[2024-12-05T13:18:23.205Z] ===================================================================================================================
[2024-12-05T13:18:23.205Z] Total              :             3512.53  439.07  0.00    0.00  4546.32  1713.49  12397.23
00:28:16.905 {
00:28:16.905   "results": [
00:28:16.905     {
00:28:16.905       "job": "nvme0n1",
00:28:16.905       "core_mask": "0x2",
00:28:16.905       "workload": "randwrite",
00:28:16.905       "status": "finished",
00:28:16.905       "queue_depth": 16,
00:28:16.905       "io_size": 131072,
00:28:16.905       "runtime": 2.007385,
00:28:16.905       "iops": 3512.5299830376334,
00:28:16.905       "mibps": 439.0662478797042,
00:28:16.905       "io_failed": 0,
00:28:16.905       "io_timeout": 0,
00:28:16.905       "avg_latency_us": 4546.322310783341,
00:28:16.905       "min_latency_us": 1713.4933333333333,
00:28:16.905       "max_latency_us": 12397.226666666667
00:28:16.905     }
00:28:16.905   ],
00:28:16.905   "core_count": 1
00:28:16.905 }
14:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
14:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
14:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
14:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 228 > 0 ))
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2902726
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2902726 ']'
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2902726
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2902726
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2902726'
killing process with pid 2902726
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2902726
Received shutdown signal, test time was about 2.000000 seconds
00:28:17.165
00:28:17.165 Latency(us)
[2024-12-05T13:18:23.465Z] Device Information : runtime(s)  IOPS     MiB/s   Fail/s  TO/s  Average  min      max
[2024-12-05T13:18:23.465Z] ===================================================================================================================
[2024-12-05T13:18:23.465Z] Total              :             0.00     0.00    0.00    0.00  0.00     0.00     0.00
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2902726
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2899865
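The burst above is the error half of the digest test: bdevperf drove random 128 KiB writes at queue depth 16 while every data PDU on qpair 1 failed its CRC32C data digest check, so each affected WRITE completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) instead of success. get_transient_errcount then reads the accumulated counter back from bdevperf over its RPC socket, and the (( 228 > 0 )) line is the actual pass/fail assertion. A minimal standalone sketch of that query, with the rpc.py path, socket, bdev name, and jq filter taken from the trace:

    #!/usr/bin/env bash
    # Sketch: read the transient-transport-error counter checked above.
    set -euo pipefail

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    bdev=nvme0n1

    # bdev_get_iostat exposes NVMe error counters under driver_specific;
    # the jq path is exactly the one host/digest.sh uses.
    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error')

    # The test passes only if the digest corruption produced at least one error.
    (( errcount > 0 )) && echo "recorded $errcount transient transport errors"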
00:28:17.165 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2899865 ']' 00:28:17.165 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2899865 00:28:17.165 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:17.165 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:17.165 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2899865 00:28:17.165 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:17.166 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:17.166 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2899865' 00:28:17.166 killing process with pid 2899865 00:28:17.166 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2899865 00:28:17.166 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2899865 00:28:17.426 00:28:17.426 real 0m16.534s 00:28:17.426 user 0m32.921s 00:28:17.426 sys 0m3.451s 00:28:17.426 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:17.426 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:17.426 ************************************ 00:28:17.426 END TEST nvmf_digest_error 00:28:17.426 ************************************ 00:28:17.426 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:17.426 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:17.426 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:17.426 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:28:17.426 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:17.426 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:17.426 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:17.426 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:17.426 rmmod nvme_tcp 00:28:17.426 rmmod nvme_fabrics 00:28:17.426 rmmod nvme_keyring 00:28:17.426 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:17.426 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:17.426 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:17.426 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2899865 ']' 00:28:17.426 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2899865 00:28:17.426 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2899865 ']' 00:28:17.426 14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2899865 00:28:17.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2899865) - No such process 00:28:17.426 14:18:23 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2899865 is not found'
Process with pid 2899865 is not found
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
14:18:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:19.971 14:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:19.971
00:28:19.971 real    0m43.648s
00:28:19.971 user    1m9.098s
00:28:19.971 sys     0m12.788s
14:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
14:18:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:28:19.971 ************************************
00:28:19.971 END TEST nvmf_digest
00:28:19.971 ************************************
14:18:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
14:18:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
14:18:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
14:18:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
14:18:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
14:18:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
14:18:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:19.972 ************************************
00:28:19.972 START TEST nvmf_bdevperf
00:28:19.972 ************************************
14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:28:19.972 * Looking for test storage...
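Both bperf (pid 2902726) and the nvmf target app (pid 2899865) are torn down through autotest_common.sh's killprocess helper, traced twice above: the host/digest.sh@116 call kills the reactor_0 target, and the later nvmf/common.sh@518 call inside nvmftestfini finds the pid already gone and only prints 'Process with pid 2899865 is not found'. A rough reconstruction of that flow as it appears in the xtrace (the real helper has extra handling, e.g. for processes launched through sudo, that is omitted here):

    #!/usr/bin/env bash
    # Rough reconstruction of the killprocess flow traced above; the
    # @NNN comments refer to the autotest_common.sh lines in the xtrace.
    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                        # @954: require a pid
        if kill -0 "$pid" 2>/dev/null; then              # @958: still alive?
            if [ "$(uname)" = Linux ]; then              # @959
                process_name=$(ps --no-headers -o comm= "$pid")  # @960
            fi
            # @964: the real helper special-cases process_name = sudo
            echo "killing process with pid $pid"         # @972
            kill "$pid"                                  # @973
            wait "$pid" 2>/dev/null || true              # @978: reap if it is our child
        else
            echo "Process with pid $pid is not found"    # @981
        fi
    }

    killprocess 2899865   # second call in the trace: pid already gone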
00:28:19.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:19.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.972 --rc genhtml_branch_coverage=1 00:28:19.972 --rc genhtml_function_coverage=1 00:28:19.972 --rc genhtml_legend=1 00:28:19.972 --rc geninfo_all_blocks=1 00:28:19.972 --rc geninfo_unexecuted_blocks=1 00:28:19.972 00:28:19.972 ' 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:19.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.972 --rc genhtml_branch_coverage=1 00:28:19.972 --rc genhtml_function_coverage=1 00:28:19.972 --rc genhtml_legend=1 00:28:19.972 --rc geninfo_all_blocks=1 00:28:19.972 --rc geninfo_unexecuted_blocks=1 00:28:19.972 00:28:19.972 ' 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:19.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.972 --rc genhtml_branch_coverage=1 00:28:19.972 --rc genhtml_function_coverage=1 00:28:19.972 --rc genhtml_legend=1 00:28:19.972 --rc geninfo_all_blocks=1 00:28:19.972 --rc geninfo_unexecuted_blocks=1 00:28:19.972 00:28:19.972 ' 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:19.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:19.972 --rc genhtml_branch_coverage=1 00:28:19.972 --rc genhtml_function_coverage=1 00:28:19.972 --rc genhtml_legend=1 00:28:19.972 --rc geninfo_all_blocks=1 00:28:19.972 --rc geninfo_unexecuted_blocks=1 00:28:19.972 00:28:19.972 ' 00:28:19.972 14:18:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:19.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:19.972 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:19.973 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:19.973 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:19.973 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:19.973 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:19.973 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.973 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:19.973 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:19.973 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:19.973 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.973 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:19.973 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.973 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:19.973 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:19.973 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:19.973 14:18:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:28.117 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:28.117 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
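For reference, the device walk traced above (nvmf/common.sh@410-428) reduces to globbing each matched PCI function's net/ directory in sysfs. A minimal standalone sketch using the two E810 ports found in this run:

# Print the kernel net devices registered under each matched PCI function.
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] || continue   # glob has no match if the NIC driver is unbound
        echo "Found net devices under $pci: ${path##*/}"
    done
done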
00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:28.117 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.117 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:28.117 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:28.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:28.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:28:28.118 00:28:28.118 --- 10.0.0.2 ping statistics --- 00:28:28.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.118 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:28.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:28.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:28:28.118 00:28:28.118 --- 10.0.0.1 ping statistics --- 00:28:28.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.118 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2907744 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2907744 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2907744 ']' 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:28.118 14:18:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.118 [2024-12-05 14:18:33.611516] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
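The nvmf_tcp_init sequence above is reproducible by hand. A minimal sketch of the same wiring, with the target port (cvl_0_0) moved into a namespace and the initiator port (cvl_0_1) left in the root namespace, exactly as this run does:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                       # target-side NIC lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # root namespace reaches the namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1                # and back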
00:28:28.118 [2024-12-05 14:18:33.611580] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.118 [2024-12-05 14:18:33.709694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:28.118 [2024-12-05 14:18:33.761643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:28.118 [2024-12-05 14:18:33.761692] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:28.118 [2024-12-05 14:18:33.761701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:28.118 [2024-12-05 14:18:33.761708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:28.118 [2024-12-05 14:18:33.761714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:28.118 [2024-12-05 14:18:33.763813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:28.118 [2024-12-05 14:18:33.763973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.118 [2024-12-05 14:18:33.763976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:28.380 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:28.380 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:28.380 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:28.380 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:28.380 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.380 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.381 [2024-12-05 14:18:34.480282] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.381 Malloc0 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
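waitforlisten, invoked above, simply polls the RPC socket until the freshly launched nvmf_tgt (pid 2907744 here) answers. A minimal sketch of that wait; rpc_get_methods and the -s/-t options are standard rpc.py, while the 0.5 s polling interval is an assumption:

pid=2907744
until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done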
00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.381 [2024-12-05 14:18:34.553694] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:28.381 { 00:28:28.381 "params": { 00:28:28.381 "name": "Nvme$subsystem", 00:28:28.381 "trtype": "$TEST_TRANSPORT", 00:28:28.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.381 "adrfam": "ipv4", 00:28:28.381 "trsvcid": "$NVMF_PORT", 00:28:28.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.381 "hdgst": ${hdgst:-false}, 00:28:28.381 "ddgst": ${ddgst:-false} 00:28:28.381 }, 00:28:28.381 "method": "bdev_nvme_attach_controller" 00:28:28.381 } 00:28:28.381 EOF 00:28:28.381 )") 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:28.381 14:18:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:28.381 "params": { 00:28:28.381 "name": "Nvme1", 00:28:28.381 "trtype": "tcp", 00:28:28.381 "traddr": "10.0.0.2", 00:28:28.381 "adrfam": "ipv4", 00:28:28.381 "trsvcid": "4420", 00:28:28.381 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:28.381 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:28.381 "hdgst": false, 00:28:28.381 "ddgst": false 00:28:28.381 }, 00:28:28.381 "method": "bdev_nvme_attach_controller" 00:28:28.381 }' 00:28:28.381 [2024-12-05 14:18:34.614031] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
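Taken together, the rpc_cmd calls above stand the target up in five steps: create the TCP transport, create a 64 MiB/512 B malloc bdev, create the subsystem, attach the namespace, and open the listener. The same sequence as plain rpc.py invocations (the socket defaults to /var/tmp/spdk.sock):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420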
00:28:28.381 [2024-12-05 14:18:34.614092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2907799 ] 00:28:28.642 [2024-12-05 14:18:34.703576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.642 [2024-12-05 14:18:34.757192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.903 Running I/O for 1 seconds... 00:28:29.843 8552.00 IOPS, 33.41 MiB/s 00:28:29.843 Latency(us) 00:28:29.843 [2024-12-05T13:18:36.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.843 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:29.843 Verification LBA range: start 0x0 length 0x4000 00:28:29.843 Nvme1n1 : 1.01 8597.53 33.58 0.00 0.00 14814.64 1897.81 16165.55 00:28:29.843 [2024-12-05T13:18:36.143Z] =================================================================================================================== 00:28:29.843 [2024-12-05T13:18:36.143Z] Total : 8597.53 33.58 0.00 0.00 14814.64 1897.81 16165.55 00:28:30.103 14:18:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2908121 00:28:30.103 14:18:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:30.103 14:18:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:30.103 14:18:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:30.103 14:18:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:30.103 14:18:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:30.103 14:18:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:30.103 14:18:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:30.103 { 00:28:30.103 "params": { 00:28:30.103 "name": "Nvme$subsystem", 00:28:30.103 "trtype": "$TEST_TRANSPORT", 00:28:30.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:30.103 "adrfam": "ipv4", 00:28:30.103 "trsvcid": "$NVMF_PORT", 00:28:30.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:30.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:30.103 "hdgst": ${hdgst:-false}, 00:28:30.103 "ddgst": ${ddgst:-false} 00:28:30.103 }, 00:28:30.103 "method": "bdev_nvme_attach_controller" 00:28:30.103 } 00:28:30.103 EOF 00:28:30.103 )") 00:28:30.103 14:18:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:30.103 14:18:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
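The gen_nvmf_target_json expansion above feeds bdevperf its JSON config over an anonymous pipe (/dev/fd/62 for the first run, /dev/fd/63 for the second). Assuming SPDK's standard subsystems/config wrapper around the printed fragment, the equivalent on-disk form for this run would look roughly like the sketch below; the /tmp path is arbitrary:

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
build/examples/bdevperf --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 15 -f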
00:28:30.103 14:18:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:30.103 14:18:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:30.103 "params": { 00:28:30.103 "name": "Nvme1", 00:28:30.103 "trtype": "tcp", 00:28:30.103 "traddr": "10.0.0.2", 00:28:30.103 "adrfam": "ipv4", 00:28:30.103 "trsvcid": "4420", 00:28:30.103 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:30.103 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:30.103 "hdgst": false, 00:28:30.103 "ddgst": false 00:28:30.103 }, 00:28:30.103 "method": "bdev_nvme_attach_controller" 00:28:30.103 }' 00:28:30.103 [2024-12-05 14:18:36.312133] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:28:30.103 [2024-12-05 14:18:36.312215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2908121 ] 00:28:30.364 [2024-12-05 14:18:36.408464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.364 [2024-12-05 14:18:36.461075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.625 Running I/O for 15 seconds... 00:28:32.509 8857.00 IOPS, 34.60 MiB/s [2024-12-05T13:18:39.384Z] 10044.50 IOPS, 39.24 MiB/s [2024-12-05T13:18:39.384Z] 14:18:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2907744 00:28:33.084 14:18:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:33.084 [2024-12-05 14:18:39.264394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:69688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.084 [2024-12-05 14:18:39.264436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.084 [2024-12-05 14:18:39.264459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:69696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.084 [2024-12-05 14:18:39.264469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.084 [2024-12-05 14:18:39.264481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:69704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.084 [2024-12-05 14:18:39.264491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.084 [2024-12-05 14:18:39.264506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:69712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.084 [2024-12-05 14:18:39.264513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.084 [2024-12-05 14:18:39.264524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.084 [2024-12-05 14:18:39.264533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.084 [2024-12-05 14:18:39.264543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:69728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.084 [2024-12-05 
14:18:39.264552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the dump continues in the same pattern for roughly ninety more READ commands (qid:1, len:8, lba 69736 through 70456): each is echoed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION while the killed target is unreachable; the repeated records are elided here ...]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.087 [2024-12-05 14:18:39.266151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:70464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.087 [2024-12-05 14:18:39.266158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.087 [2024-12-05 14:18:39.266168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:70472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.087 [2024-12-05 14:18:39.266176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.087 [2024-12-05 14:18:39.266185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:70480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.087 [2024-12-05 14:18:39.266193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.087 [2024-12-05 14:18:39.266202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.087 [2024-12-05 14:18:39.266210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.087 [2024-12-05 14:18:39.266219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:70496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.087 [2024-12-05 14:18:39.266227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.087 [2024-12-05 14:18:39.266236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:70504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.087 [2024-12-05 14:18:39.266243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:70512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.088 [2024-12-05 14:18:39.266262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:70520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.088 [2024-12-05 14:18:39.266278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:70528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.088 [2024-12-05 14:18:39.266295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:70536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.088 [2024-12-05 14:18:39.266312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:70544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.088 [2024-12-05 14:18:39.266329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.088 [2024-12-05 14:18:39.266346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:70560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.088 [2024-12-05 14:18:39.266363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:70568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.088 [2024-12-05 14:18:39.266380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.088 [2024-12-05 14:18:39.266397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.088 [2024-12-05 14:18:39.266414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.088 [2024-12-05 14:18:39.266431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:70600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.088 [2024-12-05 14:18:39.266448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.088 [2024-12-05 14:18:39.266552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.088 [2024-12-05 14:18:39.266571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:70624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.088 [2024-12-05 14:18:39.266588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:70632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.088 [2024-12-05 14:18:39.266605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:70640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.088 [2024-12-05 14:18:39.266622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:70648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.088 [2024-12-05 14:18:39.266639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.088 [2024-12-05 14:18:39.266656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:70664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.088 [2024-12-05 14:18:39.266673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.088 [2024-12-05 14:18:39.266690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:70680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.088 [2024-12-05 14:18:39.266706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:70688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.088 [2024-12-05 14:18:39.266723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:70696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.088 [2024-12-05 14:18:39.266740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.266749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2261170 is same with the state(6) to be set 00:28:33.088 [2024-12-05 14:18:39.266758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:33.088 [2024-12-05 14:18:39.266764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:33.088 [2024-12-05 14:18:39.266771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70704 len:8 PRP1 0x0 PRP2 0x0 00:28:33.088 [2024-12-05 14:18:39.266781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:33.088 [2024-12-05 14:18:39.270464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.088 [2024-12-05 14:18:39.270516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.088 [2024-12-05 14:18:39.271308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.088 [2024-12-05 14:18:39.271326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.088 [2024-12-05 14:18:39.271334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.088 [2024-12-05 14:18:39.271559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.088 [2024-12-05 14:18:39.271779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.088 [2024-12-05 14:18:39.271788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.088 [2024-12-05 14:18:39.271797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.088 [2024-12-05 14:18:39.271806] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:33.088 [2024-12-05 14:18:39.284690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.088 [2024-12-05 14:18:39.285284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.088 [2024-12-05 14:18:39.285323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.089 [2024-12-05 14:18:39.285334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.089 [2024-12-05 14:18:39.285583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.089 [2024-12-05 14:18:39.285806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.089 [2024-12-05 14:18:39.285816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.089 [2024-12-05 14:18:39.285824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:28:33.089 [2024-12-05 14:18:39.285833] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
[... 33 further identical reset attempts against [nqn.2016-06.io.spdk:cnode1, 2], one every ~13-14 ms from 14:18:39.298532 through 14:18:39.744180, each failing the same way: connect() failed, errno = 111 to addr=10.0.0.2, port=4420 (tqpair=0x224e010), Failed to flush tqpair (9): Bad file descriptor, Ctrlr is in error state, controller reinitialization failed, in failed state, Resetting controller failed. ...]
00:28:33.619 [2024-12-05 14:18:39.756704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:33.619 [2024-12-05 14:18:39.757378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.619 [2024-12-05 14:18:39.757440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:33.619 [2024-12-05 14:18:39.757453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:33.619 [2024-12-05 14:18:39.757720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:33.619 [2024-12-05 14:18:39.757953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:33.619 [2024-12-05 14:18:39.757963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:33.619 [2024-12-05 14:18:39.757971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:33.619 [2024-12-05 14:18:39.757981] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:33.619 [2024-12-05 14:18:39.770694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.619 [2024-12-05 14:18:39.771383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-05 14:18:39.771445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.619 [2024-12-05 14:18:39.771471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.619 [2024-12-05 14:18:39.771726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.619 [2024-12-05 14:18:39.771952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.619 [2024-12-05 14:18:39.771961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.619 [2024-12-05 14:18:39.771969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.619 [2024-12-05 14:18:39.771978] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:33.619 [2024-12-05 14:18:39.784495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.619 [2024-12-05 14:18:39.785203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-05 14:18:39.785266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.619 [2024-12-05 14:18:39.785278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.619 [2024-12-05 14:18:39.785544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.619 [2024-12-05 14:18:39.785770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.619 [2024-12-05 14:18:39.785780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.619 [2024-12-05 14:18:39.785788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.619 [2024-12-05 14:18:39.785797] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.619 [2024-12-05 14:18:39.798313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.619 [2024-12-05 14:18:39.799012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-05 14:18:39.799074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.619 [2024-12-05 14:18:39.799087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.619 [2024-12-05 14:18:39.799341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.619 [2024-12-05 14:18:39.799580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.619 [2024-12-05 14:18:39.799591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.619 [2024-12-05 14:18:39.799606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.619 [2024-12-05 14:18:39.799615] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:33.619 8365.00 IOPS, 32.68 MiB/s [2024-12-05T13:18:39.919Z] [2024-12-05 14:18:39.813167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.619 [2024-12-05 14:18:39.813878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-05 14:18:39.813941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.619 [2024-12-05 14:18:39.813954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.619 [2024-12-05 14:18:39.814207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.619 [2024-12-05 14:18:39.814433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.619 [2024-12-05 14:18:39.814442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.619 [2024-12-05 14:18:39.814451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.619 [2024-12-05 14:18:39.814477] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.619 [2024-12-05 14:18:39.827030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.619 [2024-12-05 14:18:39.827632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-05 14:18:39.827663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.619 [2024-12-05 14:18:39.827672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.619 [2024-12-05 14:18:39.827893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.619 [2024-12-05 14:18:39.828114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.619 [2024-12-05 14:18:39.828123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.619 [2024-12-05 14:18:39.828131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.619 [2024-12-05 14:18:39.828140] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:33.619 [2024-12-05 14:18:39.839685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.619 [2024-12-05 14:18:39.840218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.619 [2024-12-05 14:18:39.840240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.619 [2024-12-05 14:18:39.840246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.619 [2024-12-05 14:18:39.840400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.619 [2024-12-05 14:18:39.840563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.619 [2024-12-05 14:18:39.840570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.619 [2024-12-05 14:18:39.840576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.620 [2024-12-05 14:18:39.840582] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.620 [2024-12-05 14:18:39.852291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.620 [2024-12-05 14:18:39.852789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.620 [2024-12-05 14:18:39.852809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.620 [2024-12-05 14:18:39.852815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.620 [2024-12-05 14:18:39.852966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.620 [2024-12-05 14:18:39.853118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.620 [2024-12-05 14:18:39.853124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.620 [2024-12-05 14:18:39.853129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.620 [2024-12-05 14:18:39.853134] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:33.620 [2024-12-05 14:18:39.864975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.620 [2024-12-05 14:18:39.865560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.620 [2024-12-05 14:18:39.865609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.620 [2024-12-05 14:18:39.865618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.620 [2024-12-05 14:18:39.865796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.620 [2024-12-05 14:18:39.865951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.620 [2024-12-05 14:18:39.865958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.620 [2024-12-05 14:18:39.865964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.620 [2024-12-05 14:18:39.865970] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.620 [2024-12-05 14:18:39.877684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.620 [2024-12-05 14:18:39.878279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.620 [2024-12-05 14:18:39.878323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.620 [2024-12-05 14:18:39.878332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.620 [2024-12-05 14:18:39.878515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.620 [2024-12-05 14:18:39.878670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.620 [2024-12-05 14:18:39.878677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.620 [2024-12-05 14:18:39.878683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.620 [2024-12-05 14:18:39.878689] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:33.620 [2024-12-05 14:18:39.890391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.620 [2024-12-05 14:18:39.890980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.620 [2024-12-05 14:18:39.891020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.620 [2024-12-05 14:18:39.891033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.620 [2024-12-05 14:18:39.891206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.620 [2024-12-05 14:18:39.891360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.620 [2024-12-05 14:18:39.891366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.620 [2024-12-05 14:18:39.891372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.620 [2024-12-05 14:18:39.891379] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.620 [2024-12-05 14:18:39.903076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.620 [2024-12-05 14:18:39.903587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.620 [2024-12-05 14:18:39.903625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.620 [2024-12-05 14:18:39.903634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.620 [2024-12-05 14:18:39.903808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.620 [2024-12-05 14:18:39.903962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.620 [2024-12-05 14:18:39.903968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.620 [2024-12-05 14:18:39.903973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.620 [2024-12-05 14:18:39.903979] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:33.882 [2024-12-05 14:18:39.915688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.882 [2024-12-05 14:18:39.916275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.882 [2024-12-05 14:18:39.916312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.882 [2024-12-05 14:18:39.916321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.882 [2024-12-05 14:18:39.916498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.882 [2024-12-05 14:18:39.916652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.882 [2024-12-05 14:18:39.916659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.882 [2024-12-05 14:18:39.916665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.882 [2024-12-05 14:18:39.916671] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.882 [2024-12-05 14:18:39.928381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.882 [2024-12-05 14:18:39.928878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.882 [2024-12-05 14:18:39.928895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.882 [2024-12-05 14:18:39.928901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.882 [2024-12-05 14:18:39.929052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.883 [2024-12-05 14:18:39.929208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.883 [2024-12-05 14:18:39.929214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.883 [2024-12-05 14:18:39.929219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.883 [2024-12-05 14:18:39.929224] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:33.883 [2024-12-05 14:18:39.941050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.883 [2024-12-05 14:18:39.941546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.883 [2024-12-05 14:18:39.941561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.883 [2024-12-05 14:18:39.941566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.883 [2024-12-05 14:18:39.941716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.883 [2024-12-05 14:18:39.941866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.883 [2024-12-05 14:18:39.941872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.883 [2024-12-05 14:18:39.941877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.883 [2024-12-05 14:18:39.941883] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.883 [2024-12-05 14:18:39.953698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.883 [2024-12-05 14:18:39.954194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.883 [2024-12-05 14:18:39.954207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.883 [2024-12-05 14:18:39.954212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.883 [2024-12-05 14:18:39.954361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.883 [2024-12-05 14:18:39.954518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.883 [2024-12-05 14:18:39.954525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.883 [2024-12-05 14:18:39.954530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.883 [2024-12-05 14:18:39.954534] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:33.883 [2024-12-05 14:18:39.966351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.883 [2024-12-05 14:18:39.966923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.883 [2024-12-05 14:18:39.966955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.883 [2024-12-05 14:18:39.966963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.883 [2024-12-05 14:18:39.967129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.883 [2024-12-05 14:18:39.967283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.883 [2024-12-05 14:18:39.967289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.883 [2024-12-05 14:18:39.967299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.883 [2024-12-05 14:18:39.967305] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.883 [2024-12-05 14:18:39.978996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.883 [2024-12-05 14:18:39.979554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.883 [2024-12-05 14:18:39.979586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.883 [2024-12-05 14:18:39.979595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.883 [2024-12-05 14:18:39.979765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.883 [2024-12-05 14:18:39.979917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.883 [2024-12-05 14:18:39.979924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.883 [2024-12-05 14:18:39.979929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.883 [2024-12-05 14:18:39.979935] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:33.883 [2024-12-05 14:18:39.991624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.883 [2024-12-05 14:18:39.991933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.883 [2024-12-05 14:18:39.991948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.883 [2024-12-05 14:18:39.991954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.883 [2024-12-05 14:18:39.992104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.883 [2024-12-05 14:18:39.992254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.883 [2024-12-05 14:18:39.992260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.883 [2024-12-05 14:18:39.992265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.883 [2024-12-05 14:18:39.992270] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.883 [2024-12-05 14:18:40.005098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.883 [2024-12-05 14:18:40.005608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.883 [2024-12-05 14:18:40.005623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.883 [2024-12-05 14:18:40.005629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.883 [2024-12-05 14:18:40.005781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.883 [2024-12-05 14:18:40.005930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.883 [2024-12-05 14:18:40.005936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.883 [2024-12-05 14:18:40.005941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.883 [2024-12-05 14:18:40.005946] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:33.883 [2024-12-05 14:18:40.017805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.883 [2024-12-05 14:18:40.018269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.883 [2024-12-05 14:18:40.018299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.883 [2024-12-05 14:18:40.018308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.883 [2024-12-05 14:18:40.018483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.883 [2024-12-05 14:18:40.018637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.883 [2024-12-05 14:18:40.018643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.883 [2024-12-05 14:18:40.018648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.883 [2024-12-05 14:18:40.018654] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.883 [2024-12-05 14:18:40.030542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.883 [2024-12-05 14:18:40.031024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.883 [2024-12-05 14:18:40.031040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.883 [2024-12-05 14:18:40.031046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.883 [2024-12-05 14:18:40.031198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.883 [2024-12-05 14:18:40.031349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.883 [2024-12-05 14:18:40.031355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.883 [2024-12-05 14:18:40.031360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.883 [2024-12-05 14:18:40.031365] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:33.883 [2024-12-05 14:18:40.043215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.883 [2024-12-05 14:18:40.043774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.883 [2024-12-05 14:18:40.043805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.883 [2024-12-05 14:18:40.043814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.884 [2024-12-05 14:18:40.043980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.884 [2024-12-05 14:18:40.044133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.884 [2024-12-05 14:18:40.044140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.884 [2024-12-05 14:18:40.044150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.884 [2024-12-05 14:18:40.044157] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.884 [2024-12-05 14:18:40.055857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.884 [2024-12-05 14:18:40.056330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.884 [2024-12-05 14:18:40.056345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.884 [2024-12-05 14:18:40.056354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.884 [2024-12-05 14:18:40.056510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.884 [2024-12-05 14:18:40.056660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.884 [2024-12-05 14:18:40.056667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.884 [2024-12-05 14:18:40.056672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.884 [2024-12-05 14:18:40.056677] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:33.884 [2024-12-05 14:18:40.068527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.884 [2024-12-05 14:18:40.069134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.884 [2024-12-05 14:18:40.069164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.884 [2024-12-05 14:18:40.069173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.884 [2024-12-05 14:18:40.069340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.884 [2024-12-05 14:18:40.069498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.884 [2024-12-05 14:18:40.069505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.884 [2024-12-05 14:18:40.069511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.884 [2024-12-05 14:18:40.069517] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.884 [2024-12-05 14:18:40.081204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.884 [2024-12-05 14:18:40.081696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.884 [2024-12-05 14:18:40.081712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.884 [2024-12-05 14:18:40.081717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.884 [2024-12-05 14:18:40.081868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.884 [2024-12-05 14:18:40.082018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.884 [2024-12-05 14:18:40.082024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.884 [2024-12-05 14:18:40.082030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.884 [2024-12-05 14:18:40.082035] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:33.884 [2024-12-05 14:18:40.093873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.884 [2024-12-05 14:18:40.094363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.884 [2024-12-05 14:18:40.094376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.884 [2024-12-05 14:18:40.094382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.884 [2024-12-05 14:18:40.094537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.884 [2024-12-05 14:18:40.094692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.884 [2024-12-05 14:18:40.094698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.884 [2024-12-05 14:18:40.094703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.884 [2024-12-05 14:18:40.094707] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.884 [2024-12-05 14:18:40.106533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.884 [2024-12-05 14:18:40.106983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.884 [2024-12-05 14:18:40.106995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.884 [2024-12-05 14:18:40.107001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.884 [2024-12-05 14:18:40.107150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.884 [2024-12-05 14:18:40.107300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.884 [2024-12-05 14:18:40.107305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.884 [2024-12-05 14:18:40.107310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.884 [2024-12-05 14:18:40.107314] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:33.884 [2024-12-05 14:18:40.119147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.884 [2024-12-05 14:18:40.119751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.884 [2024-12-05 14:18:40.119782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.884 [2024-12-05 14:18:40.119790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.884 [2024-12-05 14:18:40.119956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.884 [2024-12-05 14:18:40.120109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.884 [2024-12-05 14:18:40.120115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.884 [2024-12-05 14:18:40.120120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.884 [2024-12-05 14:18:40.120126] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.884 [2024-12-05 14:18:40.131824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.884 [2024-12-05 14:18:40.132392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.884 [2024-12-05 14:18:40.132422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.884 [2024-12-05 14:18:40.132431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.884 [2024-12-05 14:18:40.132605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.884 [2024-12-05 14:18:40.132758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.884 [2024-12-05 14:18:40.132765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.884 [2024-12-05 14:18:40.132774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.884 [2024-12-05 14:18:40.132780] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:33.884 [2024-12-05 14:18:40.144463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.884 [2024-12-05 14:18:40.145058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.884 [2024-12-05 14:18:40.145089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.884 [2024-12-05 14:18:40.145098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.884 [2024-12-05 14:18:40.145266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.884 [2024-12-05 14:18:40.145419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.884 [2024-12-05 14:18:40.145425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.884 [2024-12-05 14:18:40.145430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.884 [2024-12-05 14:18:40.145436] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:33.884 [2024-12-05 14:18:40.157144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.884 [2024-12-05 14:18:40.157519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.884 [2024-12-05 14:18:40.157534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.884 [2024-12-05 14:18:40.157540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.885 [2024-12-05 14:18:40.157690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.885 [2024-12-05 14:18:40.157840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.885 [2024-12-05 14:18:40.157846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.885 [2024-12-05 14:18:40.157851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.885 [2024-12-05 14:18:40.157855] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:33.885 [2024-12-05 14:18:40.169822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:33.885 [2024-12-05 14:18:40.170371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:33.885 [2024-12-05 14:18:40.170401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:33.885 [2024-12-05 14:18:40.170410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:33.885 [2024-12-05 14:18:40.170584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:33.885 [2024-12-05 14:18:40.170738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:33.885 [2024-12-05 14:18:40.170745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:33.885 [2024-12-05 14:18:40.170750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:33.885 [2024-12-05 14:18:40.170756] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.147 [2024-12-05 14:18:40.182443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.147 [2024-12-05 14:18:40.182901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.147 [2024-12-05 14:18:40.182916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.147 [2024-12-05 14:18:40.182922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.147 [2024-12-05 14:18:40.183072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.147 [2024-12-05 14:18:40.183222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.147 [2024-12-05 14:18:40.183231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.147 [2024-12-05 14:18:40.183239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.147 [2024-12-05 14:18:40.183244] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:34.147 [2024-12-05 14:18:40.195065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.147 [2024-12-05 14:18:40.195442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.147 [2024-12-05 14:18:40.195459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.147 [2024-12-05 14:18:40.195465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.147 [2024-12-05 14:18:40.195615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.147 [2024-12-05 14:18:40.195765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.147 [2024-12-05 14:18:40.195770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.147 [2024-12-05 14:18:40.195775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.147 [2024-12-05 14:18:40.195780] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.147 [2024-12-05 14:18:40.207738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.147 [2024-12-05 14:18:40.208186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.147 [2024-12-05 14:18:40.208198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.147 [2024-12-05 14:18:40.208204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.147 [2024-12-05 14:18:40.208353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.147 [2024-12-05 14:18:40.208508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.147 [2024-12-05 14:18:40.208514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.147 [2024-12-05 14:18:40.208519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.147 [2024-12-05 14:18:40.208524] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:34.147 [2024-12-05 14:18:40.220348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.147 [2024-12-05 14:18:40.220909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.147 [2024-12-05 14:18:40.220939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.147 [2024-12-05 14:18:40.220952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.147 [2024-12-05 14:18:40.221120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.147 [2024-12-05 14:18:40.221273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.147 [2024-12-05 14:18:40.221279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.147 [2024-12-05 14:18:40.221284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.147 [2024-12-05 14:18:40.221290] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.147 [2024-12-05 14:18:40.232980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.147 [2024-12-05 14:18:40.233553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.147 [2024-12-05 14:18:40.233583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.147 [2024-12-05 14:18:40.233591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.147 [2024-12-05 14:18:40.233759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.147 [2024-12-05 14:18:40.233911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.147 [2024-12-05 14:18:40.233917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.147 [2024-12-05 14:18:40.233923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.147 [2024-12-05 14:18:40.233929] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:34.147 [2024-12-05 14:18:40.245619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.147 [2024-12-05 14:18:40.246117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.147 [2024-12-05 14:18:40.246132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.147 [2024-12-05 14:18:40.246138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.147 [2024-12-05 14:18:40.246287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.147 [2024-12-05 14:18:40.246437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.147 [2024-12-05 14:18:40.246442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.147 [2024-12-05 14:18:40.246447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.147 [2024-12-05 14:18:40.246452] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.147 [2024-12-05 14:18:40.258282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.147 [2024-12-05 14:18:40.258738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.147 [2024-12-05 14:18:40.258750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.147 [2024-12-05 14:18:40.258756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.147 [2024-12-05 14:18:40.258905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.147 [2024-12-05 14:18:40.259062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.148 [2024-12-05 14:18:40.259067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.148 [2024-12-05 14:18:40.259072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.148 [2024-12-05 14:18:40.259077] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:34.148 [2024-12-05 14:18:40.270938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.148 [2024-12-05 14:18:40.271504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.148 [2024-12-05 14:18:40.271534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.148 [2024-12-05 14:18:40.271542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.148 [2024-12-05 14:18:40.271709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.148 [2024-12-05 14:18:40.271863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.148 [2024-12-05 14:18:40.271871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.148 [2024-12-05 14:18:40.271876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.148 [2024-12-05 14:18:40.271881] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.148 [2024-12-05 14:18:40.283576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.148 [2024-12-05 14:18:40.284046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.148 [2024-12-05 14:18:40.284060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.148 [2024-12-05 14:18:40.284066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.148 [2024-12-05 14:18:40.284216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.148 [2024-12-05 14:18:40.284366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.148 [2024-12-05 14:18:40.284373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.148 [2024-12-05 14:18:40.284377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.148 [2024-12-05 14:18:40.284382] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:34.148 [2024-12-05 14:18:40.296204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.148 [2024-12-05 14:18:40.296583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.148 [2024-12-05 14:18:40.296614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.148 [2024-12-05 14:18:40.296623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.148 [2024-12-05 14:18:40.296791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.148 [2024-12-05 14:18:40.296944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.148 [2024-12-05 14:18:40.296952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.148 [2024-12-05 14:18:40.296961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.148 [2024-12-05 14:18:40.296967] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.148 [2024-12-05 14:18:40.308905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.148 [2024-12-05 14:18:40.309406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.148 [2024-12-05 14:18:40.309421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.148 [2024-12-05 14:18:40.309427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.148 [2024-12-05 14:18:40.309581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.148 [2024-12-05 14:18:40.309732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.148 [2024-12-05 14:18:40.309737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.148 [2024-12-05 14:18:40.309742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.148 [2024-12-05 14:18:40.309747] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:34.148 [2024-12-05 14:18:40.321581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.148 [2024-12-05 14:18:40.322048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.148 [2024-12-05 14:18:40.322079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.148 [2024-12-05 14:18:40.322087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.148 [2024-12-05 14:18:40.322255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.148 [2024-12-05 14:18:40.322408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.148 [2024-12-05 14:18:40.322415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.148 [2024-12-05 14:18:40.322422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.148 [2024-12-05 14:18:40.322430] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.148 [2024-12-05 14:18:40.334266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.148 [2024-12-05 14:18:40.334794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.148 [2024-12-05 14:18:40.334809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.148 [2024-12-05 14:18:40.334815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.148 [2024-12-05 14:18:40.334965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.148 [2024-12-05 14:18:40.335115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.148 [2024-12-05 14:18:40.335121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.148 [2024-12-05 14:18:40.335126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.148 [2024-12-05 14:18:40.335130] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:34.148 [2024-12-05 14:18:40.346958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.148 [2024-12-05 14:18:40.347407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.148 [2024-12-05 14:18:40.347419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.148 [2024-12-05 14:18:40.347425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.148 [2024-12-05 14:18:40.347577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.148 [2024-12-05 14:18:40.347728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.148 [2024-12-05 14:18:40.347733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.148 [2024-12-05 14:18:40.347738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.148 [2024-12-05 14:18:40.347743] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.148 [2024-12-05 14:18:40.359555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.148 [2024-12-05 14:18:40.360012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.148 [2024-12-05 14:18:40.360024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.148 [2024-12-05 14:18:40.360029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.148 [2024-12-05 14:18:40.360178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.148 [2024-12-05 14:18:40.360328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.148 [2024-12-05 14:18:40.360334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.148 [2024-12-05 14:18:40.360339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.148 [2024-12-05 14:18:40.360344] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:34.148 [2024-12-05 14:18:40.372165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.148 [2024-12-05 14:18:40.372779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.148 [2024-12-05 14:18:40.372809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.148 [2024-12-05 14:18:40.372817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.148 [2024-12-05 14:18:40.372983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.148 [2024-12-05 14:18:40.373135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.148 [2024-12-05 14:18:40.373141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.148 [2024-12-05 14:18:40.373147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.148 [2024-12-05 14:18:40.373153] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.148 [2024-12-05 14:18:40.384831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.148 [2024-12-05 14:18:40.385286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.148 [2024-12-05 14:18:40.385301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.148 [2024-12-05 14:18:40.385310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.148 [2024-12-05 14:18:40.385465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.149 [2024-12-05 14:18:40.385616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.149 [2024-12-05 14:18:40.385622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.149 [2024-12-05 14:18:40.385627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.149 [2024-12-05 14:18:40.385632] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:34.149 [2024-12-05 14:18:40.397446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.149 [2024-12-05 14:18:40.398050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.149 [2024-12-05 14:18:40.398080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.149 [2024-12-05 14:18:40.398089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.149 [2024-12-05 14:18:40.398254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.149 [2024-12-05 14:18:40.398407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.149 [2024-12-05 14:18:40.398413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.149 [2024-12-05 14:18:40.398418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.149 [2024-12-05 14:18:40.398424] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.149 [2024-12-05 14:18:40.410115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.149 [2024-12-05 14:18:40.410495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.149 [2024-12-05 14:18:40.410515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.149 [2024-12-05 14:18:40.410521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.149 [2024-12-05 14:18:40.410676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.149 [2024-12-05 14:18:40.410827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.149 [2024-12-05 14:18:40.410833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.149 [2024-12-05 14:18:40.410839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.149 [2024-12-05 14:18:40.410844] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:34.149 [2024-12-05 14:18:40.422823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.149 [2024-12-05 14:18:40.423410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.149 [2024-12-05 14:18:40.423440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.149 [2024-12-05 14:18:40.423449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.149 [2024-12-05 14:18:40.423621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.149 [2024-12-05 14:18:40.423778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.149 [2024-12-05 14:18:40.423784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.149 [2024-12-05 14:18:40.423790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.149 [2024-12-05 14:18:40.423796] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.149 [2024-12-05 14:18:40.435481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.149 [2024-12-05 14:18:40.435907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.149 [2024-12-05 14:18:40.435922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.149 [2024-12-05 14:18:40.435927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.149 [2024-12-05 14:18:40.436078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.149 [2024-12-05 14:18:40.436228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.149 [2024-12-05 14:18:40.436234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.149 [2024-12-05 14:18:40.436239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.149 [2024-12-05 14:18:40.436244] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:34.411 [2024-12-05 14:18:40.448208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.411 [2024-12-05 14:18:40.448714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.411 [2024-12-05 14:18:40.448745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.411 [2024-12-05 14:18:40.448754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.411 [2024-12-05 14:18:40.448919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.411 [2024-12-05 14:18:40.449072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.411 [2024-12-05 14:18:40.449078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.411 [2024-12-05 14:18:40.449083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.411 [2024-12-05 14:18:40.449089] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.411 [2024-12-05 14:18:40.460923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.411 [2024-12-05 14:18:40.461409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.411 [2024-12-05 14:18:40.461424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.411 [2024-12-05 14:18:40.461430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.411 [2024-12-05 14:18:40.461584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.411 [2024-12-05 14:18:40.461735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.411 [2024-12-05 14:18:40.461741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.411 [2024-12-05 14:18:40.461750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.411 [2024-12-05 14:18:40.461755] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:34.411 [2024-12-05 14:18:40.473568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.411 [2024-12-05 14:18:40.474134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.411 [2024-12-05 14:18:40.474164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.411 [2024-12-05 14:18:40.474173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.411 [2024-12-05 14:18:40.474341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.411 [2024-12-05 14:18:40.474499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.411 [2024-12-05 14:18:40.474506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.412 [2024-12-05 14:18:40.474512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.412 [2024-12-05 14:18:40.474517] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.412 [2024-12-05 14:18:40.486199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.412 [2024-12-05 14:18:40.486684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-12-05 14:18:40.486699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.412 [2024-12-05 14:18:40.486705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.412 [2024-12-05 14:18:40.486854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.412 [2024-12-05 14:18:40.487004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.412 [2024-12-05 14:18:40.487010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.412 [2024-12-05 14:18:40.487015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.412 [2024-12-05 14:18:40.487020] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:34.412 [2024-12-05 14:18:40.498841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.412 [2024-12-05 14:18:40.499310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-12-05 14:18:40.499323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.412 [2024-12-05 14:18:40.499328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.412 [2024-12-05 14:18:40.499483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.412 [2024-12-05 14:18:40.499633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.412 [2024-12-05 14:18:40.499639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.412 [2024-12-05 14:18:40.499644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.412 [2024-12-05 14:18:40.499649] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.412 [2024-12-05 14:18:40.511466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.412 [2024-12-05 14:18:40.512050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-12-05 14:18:40.512080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.412 [2024-12-05 14:18:40.512088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.412 [2024-12-05 14:18:40.512253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.412 [2024-12-05 14:18:40.512406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.412 [2024-12-05 14:18:40.512412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.412 [2024-12-05 14:18:40.512418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.412 [2024-12-05 14:18:40.512423] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:34.412 [2024-12-05 14:18:40.524124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.412 [2024-12-05 14:18:40.524765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-12-05 14:18:40.524795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.412 [2024-12-05 14:18:40.524804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.412 [2024-12-05 14:18:40.524969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.412 [2024-12-05 14:18:40.525122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.412 [2024-12-05 14:18:40.525128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.412 [2024-12-05 14:18:40.525133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.412 [2024-12-05 14:18:40.525139] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.412 [2024-12-05 14:18:40.536829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.412 [2024-12-05 14:18:40.537489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-12-05 14:18:40.537519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.412 [2024-12-05 14:18:40.537529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.412 [2024-12-05 14:18:40.537697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.412 [2024-12-05 14:18:40.537850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.412 [2024-12-05 14:18:40.537855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.412 [2024-12-05 14:18:40.537861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.412 [2024-12-05 14:18:40.537867] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:34.412 [2024-12-05 14:18:40.549559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.412 [2024-12-05 14:18:40.549915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-12-05 14:18:40.549930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.412 [2024-12-05 14:18:40.549940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.412 [2024-12-05 14:18:40.550091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.412 [2024-12-05 14:18:40.550241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.412 [2024-12-05 14:18:40.550247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.412 [2024-12-05 14:18:40.550252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.412 [2024-12-05 14:18:40.550257] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.412 [2024-12-05 14:18:40.562222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.412 [2024-12-05 14:18:40.562668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-12-05 14:18:40.562697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.412 [2024-12-05 14:18:40.562706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.412 [2024-12-05 14:18:40.562873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.412 [2024-12-05 14:18:40.563026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.412 [2024-12-05 14:18:40.563032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.412 [2024-12-05 14:18:40.563037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.412 [2024-12-05 14:18:40.563043] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:34.412 [2024-12-05 14:18:40.574873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.412 [2024-12-05 14:18:40.575383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.412 [2024-12-05 14:18:40.575398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.412 [2024-12-05 14:18:40.575403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.412 [2024-12-05 14:18:40.575558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.412 [2024-12-05 14:18:40.575709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.412 [2024-12-05 14:18:40.575714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.412 [2024-12-05 14:18:40.575719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.412 [2024-12-05 14:18:40.575724] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.413 [2024-12-05 14:18:40.587543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.413 [2024-12-05 14:18:40.588011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-12-05 14:18:40.588024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.413 [2024-12-05 14:18:40.588030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.413 [2024-12-05 14:18:40.588179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.413 [2024-12-05 14:18:40.588333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.413 [2024-12-05 14:18:40.588339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.413 [2024-12-05 14:18:40.588344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.413 [2024-12-05 14:18:40.588349] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:34.413 [2024-12-05 14:18:40.600171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.413 [2024-12-05 14:18:40.600761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-12-05 14:18:40.600791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.413 [2024-12-05 14:18:40.600800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.413 [2024-12-05 14:18:40.600965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.413 [2024-12-05 14:18:40.601118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.413 [2024-12-05 14:18:40.601124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.413 [2024-12-05 14:18:40.601130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.413 [2024-12-05 14:18:40.601135] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.413 [2024-12-05 14:18:40.612858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.413 [2024-12-05 14:18:40.613411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-12-05 14:18:40.613441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.413 [2024-12-05 14:18:40.613450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.413 [2024-12-05 14:18:40.613621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.413 [2024-12-05 14:18:40.613774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.413 [2024-12-05 14:18:40.613781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.413 [2024-12-05 14:18:40.613786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.413 [2024-12-05 14:18:40.613792] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:34.413 [2024-12-05 14:18:40.625498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.413 [2024-12-05 14:18:40.626063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-12-05 14:18:40.626093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.413 [2024-12-05 14:18:40.626102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.413 [2024-12-05 14:18:40.626267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.413 [2024-12-05 14:18:40.626420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.413 [2024-12-05 14:18:40.626426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.413 [2024-12-05 14:18:40.626435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.413 [2024-12-05 14:18:40.626441] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.413 [2024-12-05 14:18:40.638128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.413 [2024-12-05 14:18:40.638602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-12-05 14:18:40.638617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.413 [2024-12-05 14:18:40.638623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.413 [2024-12-05 14:18:40.638773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.413 [2024-12-05 14:18:40.638924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.413 [2024-12-05 14:18:40.638930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.413 [2024-12-05 14:18:40.638935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.413 [2024-12-05 14:18:40.638940] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:34.413 [2024-12-05 14:18:40.650765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.413 [2024-12-05 14:18:40.651338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-12-05 14:18:40.651368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.413 [2024-12-05 14:18:40.651376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.413 [2024-12-05 14:18:40.651547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.413 [2024-12-05 14:18:40.651700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.413 [2024-12-05 14:18:40.651707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.413 [2024-12-05 14:18:40.651713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.413 [2024-12-05 14:18:40.651718] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.413 [2024-12-05 14:18:40.663402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.413 [2024-12-05 14:18:40.663917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-12-05 14:18:40.663932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.413 [2024-12-05 14:18:40.663937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.413 [2024-12-05 14:18:40.664087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.413 [2024-12-05 14:18:40.664237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.413 [2024-12-05 14:18:40.664242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.413 [2024-12-05 14:18:40.664247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.413 [2024-12-05 14:18:40.664252] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:34.413 [2024-12-05 14:18:40.676076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.413 [2024-12-05 14:18:40.676561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-12-05 14:18:40.676575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.413 [2024-12-05 14:18:40.676580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.413 [2024-12-05 14:18:40.676729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.413 [2024-12-05 14:18:40.676879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.413 [2024-12-05 14:18:40.676885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.413 [2024-12-05 14:18:40.676889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.413 [2024-12-05 14:18:40.676894] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.413 [2024-12-05 14:18:40.688717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.413 [2024-12-05 14:18:40.689051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.413 [2024-12-05 14:18:40.689066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.413 [2024-12-05 14:18:40.689072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.413 [2024-12-05 14:18:40.689223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.413 [2024-12-05 14:18:40.689373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.413 [2024-12-05 14:18:40.689378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.413 [2024-12-05 14:18:40.689383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.413 [2024-12-05 14:18:40.689388] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:34.413 [2024-12-05 14:18:40.701355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.414 [2024-12-05 14:18:40.701927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.414 [2024-12-05 14:18:40.701957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.414 [2024-12-05 14:18:40.701966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.414 [2024-12-05 14:18:40.702131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.414 [2024-12-05 14:18:40.702284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.414 [2024-12-05 14:18:40.702290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.414 [2024-12-05 14:18:40.702295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.414 [2024-12-05 14:18:40.702301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.676 [2024-12-05 14:18:40.714082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.676 [2024-12-05 14:18:40.714580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.676 [2024-12-05 14:18:40.714595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.676 [2024-12-05 14:18:40.714604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.676 [2024-12-05 14:18:40.714754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.676 [2024-12-05 14:18:40.714904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.676 [2024-12-05 14:18:40.714910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.676 [2024-12-05 14:18:40.714915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.676 [2024-12-05 14:18:40.714919] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:34.676 [2024-12-05 14:18:40.726753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.676 [2024-12-05 14:18:40.727233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.676 [2024-12-05 14:18:40.727247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.676 [2024-12-05 14:18:40.727252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.676 [2024-12-05 14:18:40.727401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.676 [2024-12-05 14:18:40.727555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.676 [2024-12-05 14:18:40.727562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.676 [2024-12-05 14:18:40.727567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.676 [2024-12-05 14:18:40.727572] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.676 [2024-12-05 14:18:40.739381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.676 [2024-12-05 14:18:40.739968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.676 [2024-12-05 14:18:40.739998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.676 [2024-12-05 14:18:40.740007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.676 [2024-12-05 14:18:40.740172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.676 [2024-12-05 14:18:40.740325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.676 [2024-12-05 14:18:40.740330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.676 [2024-12-05 14:18:40.740336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.676 [2024-12-05 14:18:40.740342] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:34.676 [2024-12-05 14:18:40.752023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.676 [2024-12-05 14:18:40.752579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.676 [2024-12-05 14:18:40.752609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.676 [2024-12-05 14:18:40.752618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.676 [2024-12-05 14:18:40.752786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.676 [2024-12-05 14:18:40.752942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.676 [2024-12-05 14:18:40.752948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.676 [2024-12-05 14:18:40.752954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.676 [2024-12-05 14:18:40.752960] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.676 [2024-12-05 14:18:40.764638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.676 [2024-12-05 14:18:40.765115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.676 [2024-12-05 14:18:40.765145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.676 [2024-12-05 14:18:40.765153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.676 [2024-12-05 14:18:40.765321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.676 [2024-12-05 14:18:40.765480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.676 [2024-12-05 14:18:40.765488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.676 [2024-12-05 14:18:40.765493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.676 [2024-12-05 14:18:40.765498] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:34.676 [2024-12-05 14:18:40.777313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:34.676 [2024-12-05 14:18:40.777803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:34.676 [2024-12-05 14:18:40.777833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:34.676 [2024-12-05 14:18:40.777842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:34.676 [2024-12-05 14:18:40.778007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:34.676 [2024-12-05 14:18:40.778160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:34.676 [2024-12-05 14:18:40.778166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:34.676 [2024-12-05 14:18:40.778172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:34.676 [2024-12-05 14:18:40.778177] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:34.676 [2024-12-05 14:18:40.789999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.677 [2024-12-05 14:18:40.790564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.677 [2024-12-05 14:18:40.790595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.677 [2024-12-05 14:18:40.790603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.677 [2024-12-05 14:18:40.790771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.677 [2024-12-05 14:18:40.790925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.677 [2024-12-05 14:18:40.790931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.677 [2024-12-05 14:18:40.790941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.677 [2024-12-05 14:18:40.790946] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.677 [2024-12-05 14:18:40.802632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.677 [2024-12-05 14:18:40.803122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.677 [2024-12-05 14:18:40.803137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.677 [2024-12-05 14:18:40.803142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.677 [2024-12-05 14:18:40.803292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.677 [2024-12-05 14:18:40.803442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.677 [2024-12-05 14:18:40.803448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.677 [2024-12-05 14:18:40.803459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.677 [2024-12-05 14:18:40.803464] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.677 6273.75 IOPS, 24.51 MiB/s [2024-12-05T13:18:40.977Z] [2024-12-05 14:18:40.815275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.677 [2024-12-05 14:18:40.815862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.677 [2024-12-05 14:18:40.815893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.677 [2024-12-05 14:18:40.815901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.677 [2024-12-05 14:18:40.816067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.677 [2024-12-05 14:18:40.816220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.677 [2024-12-05 14:18:40.816226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.677 [2024-12-05 14:18:40.816231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.677 [2024-12-05 14:18:40.816237] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.677 [2024-12-05 14:18:40.827935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.677 [2024-12-05 14:18:40.828529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.677 [2024-12-05 14:18:40.828559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.677 [2024-12-05 14:18:40.828568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.677 [2024-12-05 14:18:40.828736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.677 [2024-12-05 14:18:40.828888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.677 [2024-12-05 14:18:40.828895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.677 [2024-12-05 14:18:40.828900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.677 [2024-12-05 14:18:40.828906] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.677 [2024-12-05 14:18:40.840590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.677 [2024-12-05 14:18:40.841089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.677 [2024-12-05 14:18:40.841103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.677 [2024-12-05 14:18:40.841108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.677 [2024-12-05 14:18:40.841258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.677 [2024-12-05 14:18:40.841408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.677 [2024-12-05 14:18:40.841413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.677 [2024-12-05 14:18:40.841418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.677 [2024-12-05 14:18:40.841423] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.677 [2024-12-05 14:18:40.853238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.677 [2024-12-05 14:18:40.853792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.677 [2024-12-05 14:18:40.853823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.677 [2024-12-05 14:18:40.853832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.677 [2024-12-05 14:18:40.853997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.677 [2024-12-05 14:18:40.854149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.677 [2024-12-05 14:18:40.854156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.677 [2024-12-05 14:18:40.854161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.677 [2024-12-05 14:18:40.854166] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.677 [2024-12-05 14:18:40.865848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.677 [2024-12-05 14:18:40.866426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.677 [2024-12-05 14:18:40.866461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.677 [2024-12-05 14:18:40.866470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.677 [2024-12-05 14:18:40.866635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.677 [2024-12-05 14:18:40.866788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.677 [2024-12-05 14:18:40.866794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.677 [2024-12-05 14:18:40.866799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.677 [2024-12-05 14:18:40.866804] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.677 [2024-12-05 14:18:40.878481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.677 [2024-12-05 14:18:40.878941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.677 [2024-12-05 14:18:40.878971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.677 [2024-12-05 14:18:40.878986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.677 [2024-12-05 14:18:40.879151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.677 [2024-12-05 14:18:40.879304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.677 [2024-12-05 14:18:40.879310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.677 [2024-12-05 14:18:40.879315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.677 [2024-12-05 14:18:40.879321] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.677 [2024-12-05 14:18:40.891144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.678 [2024-12-05 14:18:40.891740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.678 [2024-12-05 14:18:40.891769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.678 [2024-12-05 14:18:40.891778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.678 [2024-12-05 14:18:40.891943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.678 [2024-12-05 14:18:40.892095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.678 [2024-12-05 14:18:40.892102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.678 [2024-12-05 14:18:40.892107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.678 [2024-12-05 14:18:40.892113] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.678 [2024-12-05 14:18:40.903789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.678 [2024-12-05 14:18:40.904356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.678 [2024-12-05 14:18:40.904386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.678 [2024-12-05 14:18:40.904394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.678 [2024-12-05 14:18:40.904567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.678 [2024-12-05 14:18:40.904720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.678 [2024-12-05 14:18:40.904726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.678 [2024-12-05 14:18:40.904732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.678 [2024-12-05 14:18:40.904738] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.678 [2024-12-05 14:18:40.916412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.678 [2024-12-05 14:18:40.916962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.678 [2024-12-05 14:18:40.916992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.678 [2024-12-05 14:18:40.917000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.678 [2024-12-05 14:18:40.917166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.678 [2024-12-05 14:18:40.917322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.678 [2024-12-05 14:18:40.917328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.678 [2024-12-05 14:18:40.917334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.678 [2024-12-05 14:18:40.917339] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.678 [2024-12-05 14:18:40.929041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.678 [2024-12-05 14:18:40.929547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.678 [2024-12-05 14:18:40.929577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.678 [2024-12-05 14:18:40.929586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.678 [2024-12-05 14:18:40.929754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.678 [2024-12-05 14:18:40.929906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.678 [2024-12-05 14:18:40.929912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.678 [2024-12-05 14:18:40.929917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.678 [2024-12-05 14:18:40.929923] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.678 [2024-12-05 14:18:40.941745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.678 [2024-12-05 14:18:40.942322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.678 [2024-12-05 14:18:40.942352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.678 [2024-12-05 14:18:40.942361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.678 [2024-12-05 14:18:40.942533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.678 [2024-12-05 14:18:40.942686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.678 [2024-12-05 14:18:40.942692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.678 [2024-12-05 14:18:40.942698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.678 [2024-12-05 14:18:40.942703] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.678 [2024-12-05 14:18:40.954377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.678 [2024-12-05 14:18:40.954926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.678 [2024-12-05 14:18:40.954956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.678 [2024-12-05 14:18:40.954964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.678 [2024-12-05 14:18:40.955129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.678 [2024-12-05 14:18:40.955282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.678 [2024-12-05 14:18:40.955288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.678 [2024-12-05 14:18:40.955297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.678 [2024-12-05 14:18:40.955302] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.678 [2024-12-05 14:18:40.966981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.678 [2024-12-05 14:18:40.967550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.678 [2024-12-05 14:18:40.967580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.678 [2024-12-05 14:18:40.967589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.678 [2024-12-05 14:18:40.967755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.678 [2024-12-05 14:18:40.967907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.678 [2024-12-05 14:18:40.967913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.678 [2024-12-05 14:18:40.967919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.678 [2024-12-05 14:18:40.967924] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.940 [2024-12-05 14:18:40.979607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.940 [2024-12-05 14:18:40.980178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.940 [2024-12-05 14:18:40.980208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.940 [2024-12-05 14:18:40.980217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.940 [2024-12-05 14:18:40.980383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.940 [2024-12-05 14:18:40.980543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.940 [2024-12-05 14:18:40.980550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.940 [2024-12-05 14:18:40.980556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.940 [2024-12-05 14:18:40.980562] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.940 [2024-12-05 14:18:40.992237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.940 [2024-12-05 14:18:40.992825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.940 [2024-12-05 14:18:40.992855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.940 [2024-12-05 14:18:40.992864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.940 [2024-12-05 14:18:40.993029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.940 [2024-12-05 14:18:40.993182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.940 [2024-12-05 14:18:40.993188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.940 [2024-12-05 14:18:40.993193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.940 [2024-12-05 14:18:40.993198] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.940 [2024-12-05 14:18:41.004880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.940 [2024-12-05 14:18:41.005345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.940 [2024-12-05 14:18:41.005360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.940 [2024-12-05 14:18:41.005365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.940 [2024-12-05 14:18:41.005521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.940 [2024-12-05 14:18:41.005671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.940 [2024-12-05 14:18:41.005677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.940 [2024-12-05 14:18:41.005682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.940 [2024-12-05 14:18:41.005687] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.940 [2024-12-05 14:18:41.017496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.940 [2024-12-05 14:18:41.017962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.940 [2024-12-05 14:18:41.017991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.940 [2024-12-05 14:18:41.018000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.940 [2024-12-05 14:18:41.018165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.940 [2024-12-05 14:18:41.018318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.940 [2024-12-05 14:18:41.018325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.940 [2024-12-05 14:18:41.018330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.940 [2024-12-05 14:18:41.018336] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.940 [2024-12-05 14:18:41.030174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.940 [2024-12-05 14:18:41.030681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.940 [2024-12-05 14:18:41.030712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.940 [2024-12-05 14:18:41.030721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.940 [2024-12-05 14:18:41.030889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.940 [2024-12-05 14:18:41.031041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.940 [2024-12-05 14:18:41.031047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.940 [2024-12-05 14:18:41.031053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.940 [2024-12-05 14:18:41.031058] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.940 [2024-12-05 14:18:41.042885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.940 [2024-12-05 14:18:41.043474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.940 [2024-12-05 14:18:41.043504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.940 [2024-12-05 14:18:41.043517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.940 [2024-12-05 14:18:41.043685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.940 [2024-12-05 14:18:41.043838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.940 [2024-12-05 14:18:41.043845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.940 [2024-12-05 14:18:41.043851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.940 [2024-12-05 14:18:41.043858] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.940 [2024-12-05 14:18:41.055543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.940 [2024-12-05 14:18:41.056141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.940 [2024-12-05 14:18:41.056171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.941 [2024-12-05 14:18:41.056180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.941 [2024-12-05 14:18:41.056346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.941 [2024-12-05 14:18:41.056506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.941 [2024-12-05 14:18:41.056513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.941 [2024-12-05 14:18:41.056518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.941 [2024-12-05 14:18:41.056525] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.941 [2024-12-05 14:18:41.068199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.941 [2024-12-05 14:18:41.068794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.941 [2024-12-05 14:18:41.068824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.941 [2024-12-05 14:18:41.068833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.941 [2024-12-05 14:18:41.068998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.941 [2024-12-05 14:18:41.069151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.941 [2024-12-05 14:18:41.069157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.941 [2024-12-05 14:18:41.069162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.941 [2024-12-05 14:18:41.069168] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.941 [2024-12-05 14:18:41.080847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.941 [2024-12-05 14:18:41.081415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.941 [2024-12-05 14:18:41.081444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.941 [2024-12-05 14:18:41.081453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.941 [2024-12-05 14:18:41.081625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.941 [2024-12-05 14:18:41.081782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.941 [2024-12-05 14:18:41.081788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.941 [2024-12-05 14:18:41.081794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.941 [2024-12-05 14:18:41.081799] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.941 [2024-12-05 14:18:41.093473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.941 [2024-12-05 14:18:41.093951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.941 [2024-12-05 14:18:41.093980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.941 [2024-12-05 14:18:41.093989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.941 [2024-12-05 14:18:41.094154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.941 [2024-12-05 14:18:41.094307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.941 [2024-12-05 14:18:41.094313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.941 [2024-12-05 14:18:41.094318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.941 [2024-12-05 14:18:41.094324] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.941 [2024-12-05 14:18:41.106148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.941 [2024-12-05 14:18:41.106750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.941 [2024-12-05 14:18:41.106781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.941 [2024-12-05 14:18:41.106789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.941 [2024-12-05 14:18:41.106955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.941 [2024-12-05 14:18:41.107107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.941 [2024-12-05 14:18:41.107113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.941 [2024-12-05 14:18:41.107118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.941 [2024-12-05 14:18:41.107124] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.941 [2024-12-05 14:18:41.118803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.941 [2024-12-05 14:18:41.119375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.941 [2024-12-05 14:18:41.119404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.941 [2024-12-05 14:18:41.119413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.941 [2024-12-05 14:18:41.119585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.941 [2024-12-05 14:18:41.119738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.941 [2024-12-05 14:18:41.119744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.941 [2024-12-05 14:18:41.119754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.941 [2024-12-05 14:18:41.119759] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.941 [2024-12-05 14:18:41.131456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.941 [2024-12-05 14:18:41.132036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.941 [2024-12-05 14:18:41.132065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.941 [2024-12-05 14:18:41.132074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.941 [2024-12-05 14:18:41.132239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.941 [2024-12-05 14:18:41.132391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.941 [2024-12-05 14:18:41.132398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.941 [2024-12-05 14:18:41.132403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.941 [2024-12-05 14:18:41.132409] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.941 [2024-12-05 14:18:41.144086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.941 [2024-12-05 14:18:41.144580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.941 [2024-12-05 14:18:41.144609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.941 [2024-12-05 14:18:41.144618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.941 [2024-12-05 14:18:41.144786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.941 [2024-12-05 14:18:41.144938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.941 [2024-12-05 14:18:41.144944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.941 [2024-12-05 14:18:41.144949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.942 [2024-12-05 14:18:41.144955] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.942 [2024-12-05 14:18:41.156782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.942 [2024-12-05 14:18:41.157291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.942 [2024-12-05 14:18:41.157306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.942 [2024-12-05 14:18:41.157311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.942 [2024-12-05 14:18:41.157468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.942 [2024-12-05 14:18:41.157619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.942 [2024-12-05 14:18:41.157625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.942 [2024-12-05 14:18:41.157630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.942 [2024-12-05 14:18:41.157634] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.942 [2024-12-05 14:18:41.169443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.942 [2024-12-05 14:18:41.170040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.942 [2024-12-05 14:18:41.170070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.942 [2024-12-05 14:18:41.170079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.942 [2024-12-05 14:18:41.170244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.942 [2024-12-05 14:18:41.170397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.942 [2024-12-05 14:18:41.170403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.942 [2024-12-05 14:18:41.170408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.942 [2024-12-05 14:18:41.170414] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.942 [2024-12-05 14:18:41.182097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.942 [2024-12-05 14:18:41.182685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.942 [2024-12-05 14:18:41.182715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.942 [2024-12-05 14:18:41.182724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.942 [2024-12-05 14:18:41.182889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.942 [2024-12-05 14:18:41.183041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.942 [2024-12-05 14:18:41.183048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.942 [2024-12-05 14:18:41.183053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.942 [2024-12-05 14:18:41.183059] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.942 [2024-12-05 14:18:41.194735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.942 [2024-12-05 14:18:41.195348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.942 [2024-12-05 14:18:41.195377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.942 [2024-12-05 14:18:41.195386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.942 [2024-12-05 14:18:41.195558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.942 [2024-12-05 14:18:41.195712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.942 [2024-12-05 14:18:41.195718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.942 [2024-12-05 14:18:41.195724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.942 [2024-12-05 14:18:41.195729] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.942 [2024-12-05 14:18:41.207418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.942 [2024-12-05 14:18:41.208015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.942 [2024-12-05 14:18:41.208045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.942 [2024-12-05 14:18:41.208057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.942 [2024-12-05 14:18:41.208223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.942 [2024-12-05 14:18:41.208375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.942 [2024-12-05 14:18:41.208381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.942 [2024-12-05 14:18:41.208386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.942 [2024-12-05 14:18:41.208392] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.942 [2024-12-05 14:18:41.220079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.942 [2024-12-05 14:18:41.220555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.942 [2024-12-05 14:18:41.220570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.942 [2024-12-05 14:18:41.220576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.942 [2024-12-05 14:18:41.220727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.942 [2024-12-05 14:18:41.220876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.942 [2024-12-05 14:18:41.220882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.942 [2024-12-05 14:18:41.220887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.942 [2024-12-05 14:18:41.220892] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:34.942 [2024-12-05 14:18:41.232724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:34.942 [2024-12-05 14:18:41.233314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:34.942 [2024-12-05 14:18:41.233344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:34.942 [2024-12-05 14:18:41.233353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:34.942 [2024-12-05 14:18:41.233526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:34.942 [2024-12-05 14:18:41.233680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:34.942 [2024-12-05 14:18:41.233686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:34.942 [2024-12-05 14:18:41.233691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:34.942 [2024-12-05 14:18:41.233697] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.206 [2024-12-05 14:18:41.245383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.206 [2024-12-05 14:18:41.245965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.206 [2024-12-05 14:18:41.245996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.206 [2024-12-05 14:18:41.246004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.206 [2024-12-05 14:18:41.246170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.206 [2024-12-05 14:18:41.246326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.206 [2024-12-05 14:18:41.246333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.206 [2024-12-05 14:18:41.246338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.206 [2024-12-05 14:18:41.246343] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.206 [2024-12-05 14:18:41.258025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.206 [2024-12-05 14:18:41.258639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.206 [2024-12-05 14:18:41.258669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.206 [2024-12-05 14:18:41.258678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.206 [2024-12-05 14:18:41.258843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.206 [2024-12-05 14:18:41.258996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.206 [2024-12-05 14:18:41.259001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.206 [2024-12-05 14:18:41.259007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.206 [2024-12-05 14:18:41.259013] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.206 [2024-12-05 14:18:41.270697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.206 [2024-12-05 14:18:41.271267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.206 [2024-12-05 14:18:41.271297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.206 [2024-12-05 14:18:41.271305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.206 [2024-12-05 14:18:41.271478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.206 [2024-12-05 14:18:41.271631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.206 [2024-12-05 14:18:41.271637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.206 [2024-12-05 14:18:41.271642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.206 [2024-12-05 14:18:41.271648] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.206 [2024-12-05 14:18:41.283322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.206 [2024-12-05 14:18:41.283903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.206 [2024-12-05 14:18:41.283933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.206 [2024-12-05 14:18:41.283942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.206 [2024-12-05 14:18:41.284107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.206 [2024-12-05 14:18:41.284260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.207 [2024-12-05 14:18:41.284266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.207 [2024-12-05 14:18:41.284275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.207 [2024-12-05 14:18:41.284280] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.207 [2024-12-05 14:18:41.295961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.207 [2024-12-05 14:18:41.296564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.207 [2024-12-05 14:18:41.296594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.207 [2024-12-05 14:18:41.296603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.207 [2024-12-05 14:18:41.296771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.207 [2024-12-05 14:18:41.296924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.207 [2024-12-05 14:18:41.296930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.207 [2024-12-05 14:18:41.296936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.207 [2024-12-05 14:18:41.296942] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.207 [2024-12-05 14:18:41.308641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.207 [2024-12-05 14:18:41.309214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.207 [2024-12-05 14:18:41.309244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.207 [2024-12-05 14:18:41.309253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.207 [2024-12-05 14:18:41.309419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.207 [2024-12-05 14:18:41.309580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.207 [2024-12-05 14:18:41.309588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.207 [2024-12-05 14:18:41.309594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.207 [2024-12-05 14:18:41.309600] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.207 [2024-12-05 14:18:41.321283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.207 [2024-12-05 14:18:41.321839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.207 [2024-12-05 14:18:41.321869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.207 [2024-12-05 14:18:41.321877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.207 [2024-12-05 14:18:41.322043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.207 [2024-12-05 14:18:41.322196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.207 [2024-12-05 14:18:41.322202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.207 [2024-12-05 14:18:41.322207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.207 [2024-12-05 14:18:41.322213] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.207 [2024-12-05 14:18:41.334004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.207 [2024-12-05 14:18:41.334501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.207 [2024-12-05 14:18:41.334517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.207 [2024-12-05 14:18:41.334523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.207 [2024-12-05 14:18:41.334673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.207 [2024-12-05 14:18:41.334823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.207 [2024-12-05 14:18:41.334828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.207 [2024-12-05 14:18:41.334833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.207 [2024-12-05 14:18:41.334839] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.207 [2024-12-05 14:18:41.346647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.207 [2024-12-05 14:18:41.347215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.207 [2024-12-05 14:18:41.347245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.207 [2024-12-05 14:18:41.347254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.207 [2024-12-05 14:18:41.347419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.207 [2024-12-05 14:18:41.347578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.207 [2024-12-05 14:18:41.347585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.207 [2024-12-05 14:18:41.347591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.207 [2024-12-05 14:18:41.347596] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.207 [2024-12-05 14:18:41.359278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.207 [2024-12-05 14:18:41.359860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.207 [2024-12-05 14:18:41.359890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.207 [2024-12-05 14:18:41.359899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.207 [2024-12-05 14:18:41.360064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.207 [2024-12-05 14:18:41.360217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.207 [2024-12-05 14:18:41.360223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.207 [2024-12-05 14:18:41.360228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.207 [2024-12-05 14:18:41.360234] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.207 [2024-12-05 14:18:41.371936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.207 [2024-12-05 14:18:41.372371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.207 [2024-12-05 14:18:41.372386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.207 [2024-12-05 14:18:41.372395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.207 [2024-12-05 14:18:41.372550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.207 [2024-12-05 14:18:41.372701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.207 [2024-12-05 14:18:41.372707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.207 [2024-12-05 14:18:41.372712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.207 [2024-12-05 14:18:41.372717] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.208 [2024-12-05 14:18:41.384540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.208 [2024-12-05 14:18:41.384856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.208 [2024-12-05 14:18:41.384870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.208 [2024-12-05 14:18:41.384875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.208 [2024-12-05 14:18:41.385026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.208 [2024-12-05 14:18:41.385175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.208 [2024-12-05 14:18:41.385181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.208 [2024-12-05 14:18:41.385186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.208 [2024-12-05 14:18:41.385190] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.208 [2024-12-05 14:18:41.397152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.208 [2024-12-05 14:18:41.397721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.208 [2024-12-05 14:18:41.397751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.208 [2024-12-05 14:18:41.397760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.208 [2024-12-05 14:18:41.397925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.208 [2024-12-05 14:18:41.398078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.208 [2024-12-05 14:18:41.398084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.208 [2024-12-05 14:18:41.398089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.208 [2024-12-05 14:18:41.398095] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.208 [2024-12-05 14:18:41.409777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.208 [2024-12-05 14:18:41.410270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.208 [2024-12-05 14:18:41.410284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.208 [2024-12-05 14:18:41.410290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.208 [2024-12-05 14:18:41.410440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.208 [2024-12-05 14:18:41.410609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.208 [2024-12-05 14:18:41.410616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.208 [2024-12-05 14:18:41.410621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.208 [2024-12-05 14:18:41.410626] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.208 [2024-12-05 14:18:41.422448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.208 [2024-12-05 14:18:41.423023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.208 [2024-12-05 14:18:41.423053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.208 [2024-12-05 14:18:41.423061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.208 [2024-12-05 14:18:41.423226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.208 [2024-12-05 14:18:41.423379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.208 [2024-12-05 14:18:41.423385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.208 [2024-12-05 14:18:41.423391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.208 [2024-12-05 14:18:41.423396] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.208 [2024-12-05 14:18:41.435076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.208 [2024-12-05 14:18:41.435652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.208 [2024-12-05 14:18:41.435681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.208 [2024-12-05 14:18:41.435690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.208 [2024-12-05 14:18:41.435856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.208 [2024-12-05 14:18:41.436008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.208 [2024-12-05 14:18:41.436014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.208 [2024-12-05 14:18:41.436020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.208 [2024-12-05 14:18:41.436025] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.208 [2024-12-05 14:18:41.447708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.208 [2024-12-05 14:18:41.448194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.208 [2024-12-05 14:18:41.448208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.208 [2024-12-05 14:18:41.448214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.208 [2024-12-05 14:18:41.448364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.208 [2024-12-05 14:18:41.448519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.208 [2024-12-05 14:18:41.448526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.208 [2024-12-05 14:18:41.448534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.208 [2024-12-05 14:18:41.448539] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.208 [2024-12-05 14:18:41.460347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.208 [2024-12-05 14:18:41.460872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.208 [2024-12-05 14:18:41.460902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.208 [2024-12-05 14:18:41.460910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.208 [2024-12-05 14:18:41.461076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.208 [2024-12-05 14:18:41.461229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.208 [2024-12-05 14:18:41.461235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.208 [2024-12-05 14:18:41.461240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.208 [2024-12-05 14:18:41.461246] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.208 [2024-12-05 14:18:41.473069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.208 [2024-12-05 14:18:41.473549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.208 [2024-12-05 14:18:41.473566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.208 [2024-12-05 14:18:41.473571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.208 [2024-12-05 14:18:41.473721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.209 [2024-12-05 14:18:41.473871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.209 [2024-12-05 14:18:41.473877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.209 [2024-12-05 14:18:41.473882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.209 [2024-12-05 14:18:41.473887] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.209 [2024-12-05 14:18:41.485746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.209 [2024-12-05 14:18:41.486192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.209 [2024-12-05 14:18:41.486205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.209 [2024-12-05 14:18:41.486210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.209 [2024-12-05 14:18:41.486360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.209 [2024-12-05 14:18:41.486514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.209 [2024-12-05 14:18:41.486520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.209 [2024-12-05 14:18:41.486525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.209 [2024-12-05 14:18:41.486530] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.209 [2024-12-05 14:18:41.498363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.209 [2024-12-05 14:18:41.498827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.209 [2024-12-05 14:18:41.498840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.209 [2024-12-05 14:18:41.498845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.209 [2024-12-05 14:18:41.498995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.209 [2024-12-05 14:18:41.499145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.209 [2024-12-05 14:18:41.499151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.209 [2024-12-05 14:18:41.499156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.209 [2024-12-05 14:18:41.499160] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.473 [2024-12-05 14:18:41.510992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.473 [2024-12-05 14:18:41.511433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.473 [2024-12-05 14:18:41.511446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.473 [2024-12-05 14:18:41.511451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.473 [2024-12-05 14:18:41.511606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.473 [2024-12-05 14:18:41.511756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.473 [2024-12-05 14:18:41.511762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.473 [2024-12-05 14:18:41.511767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.473 [2024-12-05 14:18:41.511772] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.473 [2024-12-05 14:18:41.523605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.473 [2024-12-05 14:18:41.524172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.473 [2024-12-05 14:18:41.524202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.473 [2024-12-05 14:18:41.524210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.473 [2024-12-05 14:18:41.524376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.473 [2024-12-05 14:18:41.524537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.473 [2024-12-05 14:18:41.524544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.473 [2024-12-05 14:18:41.524549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.473 [2024-12-05 14:18:41.524555] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.473 [2024-12-05 14:18:41.536240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.473 [2024-12-05 14:18:41.536799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.473 [2024-12-05 14:18:41.536829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.473 [2024-12-05 14:18:41.536841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.473 [2024-12-05 14:18:41.537006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.473 [2024-12-05 14:18:41.537159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.473 [2024-12-05 14:18:41.537165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.473 [2024-12-05 14:18:41.537170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.473 [2024-12-05 14:18:41.537176] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.473 [2024-12-05 14:18:41.548868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.473 [2024-12-05 14:18:41.549466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.473 [2024-12-05 14:18:41.549495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.473 [2024-12-05 14:18:41.549504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.473 [2024-12-05 14:18:41.549672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.473 [2024-12-05 14:18:41.549824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.473 [2024-12-05 14:18:41.549831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.473 [2024-12-05 14:18:41.549837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.473 [2024-12-05 14:18:41.549843] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.473 [2024-12-05 14:18:41.561542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.473 [2024-12-05 14:18:41.561933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.473 [2024-12-05 14:18:41.561963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.473 [2024-12-05 14:18:41.561972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.473 [2024-12-05 14:18:41.562138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.473 [2024-12-05 14:18:41.562291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.473 [2024-12-05 14:18:41.562297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.473 [2024-12-05 14:18:41.562303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.473 [2024-12-05 14:18:41.562309] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.473 [2024-12-05 14:18:41.574150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.473 [2024-12-05 14:18:41.574621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.473 [2024-12-05 14:18:41.574637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.473 [2024-12-05 14:18:41.574642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.473 [2024-12-05 14:18:41.574792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.473 [2024-12-05 14:18:41.574946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.473 [2024-12-05 14:18:41.574952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.473 [2024-12-05 14:18:41.574957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.473 [2024-12-05 14:18:41.574962] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.473 [2024-12-05 14:18:41.586781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.473 [2024-12-05 14:18:41.587348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.474 [2024-12-05 14:18:41.587378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.474 [2024-12-05 14:18:41.587387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.474 [2024-12-05 14:18:41.587560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.474 [2024-12-05 14:18:41.587713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.474 [2024-12-05 14:18:41.587719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.474 [2024-12-05 14:18:41.587724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.474 [2024-12-05 14:18:41.587730] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.474 [2024-12-05 14:18:41.599417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.474 [2024-12-05 14:18:41.600009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.474 [2024-12-05 14:18:41.600039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.474 [2024-12-05 14:18:41.600048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.474 [2024-12-05 14:18:41.600213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.474 [2024-12-05 14:18:41.600366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.474 [2024-12-05 14:18:41.600372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.474 [2024-12-05 14:18:41.600377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.474 [2024-12-05 14:18:41.600383] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.474 [2024-12-05 14:18:41.612070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.474 [2024-12-05 14:18:41.612560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.474 [2024-12-05 14:18:41.612590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.474 [2024-12-05 14:18:41.612599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.474 [2024-12-05 14:18:41.612767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.474 [2024-12-05 14:18:41.612919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.474 [2024-12-05 14:18:41.612925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.474 [2024-12-05 14:18:41.612935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.474 [2024-12-05 14:18:41.612941] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.474 [2024-12-05 14:18:41.624778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.474 [2024-12-05 14:18:41.625330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.474 [2024-12-05 14:18:41.625360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.474 [2024-12-05 14:18:41.625369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.474 [2024-12-05 14:18:41.625541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.474 [2024-12-05 14:18:41.625694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.474 [2024-12-05 14:18:41.625700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.474 [2024-12-05 14:18:41.625706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.474 [2024-12-05 14:18:41.625711] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.474 [2024-12-05 14:18:41.637381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.474 [2024-12-05 14:18:41.637965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.474 [2024-12-05 14:18:41.637995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.474 [2024-12-05 14:18:41.638003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.474 [2024-12-05 14:18:41.638168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.474 [2024-12-05 14:18:41.638321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.474 [2024-12-05 14:18:41.638327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.474 [2024-12-05 14:18:41.638332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.474 [2024-12-05 14:18:41.638338] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.474 [2024-12-05 14:18:41.650020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.474 [2024-12-05 14:18:41.650661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.474 [2024-12-05 14:18:41.650691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.474 [2024-12-05 14:18:41.650700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.474 [2024-12-05 14:18:41.650865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.474 [2024-12-05 14:18:41.651017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.474 [2024-12-05 14:18:41.651023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.474 [2024-12-05 14:18:41.651029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.474 [2024-12-05 14:18:41.651034] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.474 [2024-12-05 14:18:41.662725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.474 [2024-12-05 14:18:41.663342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.474 [2024-12-05 14:18:41.663372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.474 [2024-12-05 14:18:41.663381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.474 [2024-12-05 14:18:41.663554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.474 [2024-12-05 14:18:41.663707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.474 [2024-12-05 14:18:41.663713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.474 [2024-12-05 14:18:41.663719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.474 [2024-12-05 14:18:41.663724] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.474 [2024-12-05 14:18:41.675420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.474 [2024-12-05 14:18:41.676006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.474 [2024-12-05 14:18:41.676036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.474 [2024-12-05 14:18:41.676044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.474 [2024-12-05 14:18:41.676209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.475 [2024-12-05 14:18:41.676362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.475 [2024-12-05 14:18:41.676368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.475 [2024-12-05 14:18:41.676374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.475 [2024-12-05 14:18:41.676379] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.475 [2024-12-05 14:18:41.688062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.475 [2024-12-05 14:18:41.688657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.475 [2024-12-05 14:18:41.688687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.475 [2024-12-05 14:18:41.688696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.475 [2024-12-05 14:18:41.688862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.475 [2024-12-05 14:18:41.689014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.475 [2024-12-05 14:18:41.689020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.475 [2024-12-05 14:18:41.689026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.475 [2024-12-05 14:18:41.689031] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.475 [2024-12-05 14:18:41.700710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.475 [2024-12-05 14:18:41.701285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.475 [2024-12-05 14:18:41.701315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.475 [2024-12-05 14:18:41.701327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.475 [2024-12-05 14:18:41.701501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.475 [2024-12-05 14:18:41.701654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.475 [2024-12-05 14:18:41.701661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.475 [2024-12-05 14:18:41.701666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.475 [2024-12-05 14:18:41.701672] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.475 [2024-12-05 14:18:41.713363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.475 [2024-12-05 14:18:41.713922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.475 [2024-12-05 14:18:41.713952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.475 [2024-12-05 14:18:41.713961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.475 [2024-12-05 14:18:41.714126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.475 [2024-12-05 14:18:41.714279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.475 [2024-12-05 14:18:41.714285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.475 [2024-12-05 14:18:41.714290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.475 [2024-12-05 14:18:41.714296] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.475 [2024-12-05 14:18:41.726014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.475 [2024-12-05 14:18:41.726658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.475 [2024-12-05 14:18:41.726688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.475 [2024-12-05 14:18:41.726696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.475 [2024-12-05 14:18:41.726862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.475 [2024-12-05 14:18:41.727014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.475 [2024-12-05 14:18:41.727021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.475 [2024-12-05 14:18:41.727026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.475 [2024-12-05 14:18:41.727032] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.475 [2024-12-05 14:18:41.738722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.475 [2024-12-05 14:18:41.739296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.475 [2024-12-05 14:18:41.739326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.475 [2024-12-05 14:18:41.739335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.475 [2024-12-05 14:18:41.739507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.475 [2024-12-05 14:18:41.739664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.475 [2024-12-05 14:18:41.739670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.475 [2024-12-05 14:18:41.739676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.475 [2024-12-05 14:18:41.739683] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.475 [2024-12-05 14:18:41.751369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.475 [2024-12-05 14:18:41.751875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.475 [2024-12-05 14:18:41.751890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.475 [2024-12-05 14:18:41.751896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.475 [2024-12-05 14:18:41.752046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.475 [2024-12-05 14:18:41.752196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.475 [2024-12-05 14:18:41.752201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.475 [2024-12-05 14:18:41.752207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.475 [2024-12-05 14:18:41.752211] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.475 [2024-12-05 14:18:41.764039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.475 [2024-12-05 14:18:41.764486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.475 [2024-12-05 14:18:41.764500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.475 [2024-12-05 14:18:41.764505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.475 [2024-12-05 14:18:41.764655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.475 [2024-12-05 14:18:41.764805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.475 [2024-12-05 14:18:41.764810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.475 [2024-12-05 14:18:41.764815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.475 [2024-12-05 14:18:41.764820] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.740 [2024-12-05 14:18:41.776647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.740 [2024-12-05 14:18:41.777144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.740 [2024-12-05 14:18:41.777156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.740 [2024-12-05 14:18:41.777161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.740 [2024-12-05 14:18:41.777311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.740 [2024-12-05 14:18:41.777466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.740 [2024-12-05 14:18:41.777473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.740 [2024-12-05 14:18:41.777484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.740 [2024-12-05 14:18:41.777490] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.740 [2024-12-05 14:18:41.789311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.740 [2024-12-05 14:18:41.789782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.740 [2024-12-05 14:18:41.789795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.740 [2024-12-05 14:18:41.789801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.740 [2024-12-05 14:18:41.789950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.740 [2024-12-05 14:18:41.790101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.740 [2024-12-05 14:18:41.790106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.740 [2024-12-05 14:18:41.790111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.740 [2024-12-05 14:18:41.790116] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.740 [2024-12-05 14:18:41.801942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.740 [2024-12-05 14:18:41.802401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.740 [2024-12-05 14:18:41.802414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.740 [2024-12-05 14:18:41.802419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.740 [2024-12-05 14:18:41.802573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.740 [2024-12-05 14:18:41.802723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.740 [2024-12-05 14:18:41.802728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.740 [2024-12-05 14:18:41.802733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.740 [2024-12-05 14:18:41.802738] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.740 5019.00 IOPS, 19.61 MiB/s [2024-12-05T13:18:42.040Z] [2024-12-05 14:18:41.815159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.740 [2024-12-05 14:18:41.815577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.740 [2024-12-05 14:18:41.815591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.740 [2024-12-05 14:18:41.815596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.740 [2024-12-05 14:18:41.815746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.740 [2024-12-05 14:18:41.815895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.740 [2024-12-05 14:18:41.815901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.740 [2024-12-05 14:18:41.815906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.740 [2024-12-05 14:18:41.815910] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.740 [2024-12-05 14:18:41.827755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.740 [2024-12-05 14:18:41.828228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.740 [2024-12-05 14:18:41.828241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.740 [2024-12-05 14:18:41.828247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.741 [2024-12-05 14:18:41.828396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.741 [2024-12-05 14:18:41.828551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.741 [2024-12-05 14:18:41.828557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.741 [2024-12-05 14:18:41.828562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.741 [2024-12-05 14:18:41.828567] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.741 [2024-12-05 14:18:41.840386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:35.741 [2024-12-05 14:18:41.840838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:35.741 [2024-12-05 14:18:41.840850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:35.741 [2024-12-05 14:18:41.840856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:35.741 [2024-12-05 14:18:41.841005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:35.741 [2024-12-05 14:18:41.841155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:35.741 [2024-12-05 14:18:41.841161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:35.741 [2024-12-05 14:18:41.841166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:35.741 [2024-12-05 14:18:41.841170] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:35.741 [2024-12-05 14:18:41.852998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.741 [2024-12-05 14:18:41.853473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.741 [2024-12-05 14:18:41.853486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:35.741 [2024-12-05 14:18:41.853491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:35.741 [2024-12-05 14:18:41.853641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:35.741 [2024-12-05 14:18:41.853790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.741 [2024-12-05 14:18:41.853797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.741 [2024-12-05 14:18:41.853801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.741 [2024-12-05 14:18:41.853806] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:35.741 [2024-12-05 14:18:41.865639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.741 [2024-12-05 14:18:41.866080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.741 [2024-12-05 14:18:41.866091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:35.741 [2024-12-05 14:18:41.866100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:35.741 [2024-12-05 14:18:41.866249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:35.741 [2024-12-05 14:18:41.866398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.741 [2024-12-05 14:18:41.866405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.741 [2024-12-05 14:18:41.866410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.741 [2024-12-05 14:18:41.866415] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.741 [2024-12-05 14:18:41.878239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.741 [2024-12-05 14:18:41.878699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.741 [2024-12-05 14:18:41.878712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:35.741 [2024-12-05 14:18:41.878717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:35.741 [2024-12-05 14:18:41.878867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:35.741 [2024-12-05 14:18:41.879017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.741 [2024-12-05 14:18:41.879022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.741 [2024-12-05 14:18:41.879027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.741 [2024-12-05 14:18:41.879032] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:35.741 [2024-12-05 14:18:41.890862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.741 [2024-12-05 14:18:41.891348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.741 [2024-12-05 14:18:41.891361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:35.741 [2024-12-05 14:18:41.891366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:35.741 [2024-12-05 14:18:41.891520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:35.741 [2024-12-05 14:18:41.891670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.741 [2024-12-05 14:18:41.891676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.741 [2024-12-05 14:18:41.891681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.741 [2024-12-05 14:18:41.891685] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.741 [2024-12-05 14:18:41.903508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.741 [2024-12-05 14:18:41.903995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.741 [2024-12-05 14:18:41.904008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:35.741 [2024-12-05 14:18:41.904013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:35.741 [2024-12-05 14:18:41.904162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:35.741 [2024-12-05 14:18:41.904315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.741 [2024-12-05 14:18:41.904321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.741 [2024-12-05 14:18:41.904326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.741 [2024-12-05 14:18:41.904330] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:35.741 [2024-12-05 14:18:41.916158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.741 [2024-12-05 14:18:41.916642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.741 [2024-12-05 14:18:41.916656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:35.741 [2024-12-05 14:18:41.916661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:35.741 [2024-12-05 14:18:41.916811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:35.741 [2024-12-05 14:18:41.916961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.741 [2024-12-05 14:18:41.916966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.741 [2024-12-05 14:18:41.916971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.741 [2024-12-05 14:18:41.916976] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.741 [2024-12-05 14:18:41.928823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.741 [2024-12-05 14:18:41.929308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.741 [2024-12-05 14:18:41.929320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:35.741 [2024-12-05 14:18:41.929325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:35.741 [2024-12-05 14:18:41.929479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:35.741 [2024-12-05 14:18:41.929629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.741 [2024-12-05 14:18:41.929634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.741 [2024-12-05 14:18:41.929639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.741 [2024-12-05 14:18:41.929644] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:35.741 [2024-12-05 14:18:41.941469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.741 [2024-12-05 14:18:41.941955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.741 [2024-12-05 14:18:41.941967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:35.741 [2024-12-05 14:18:41.941973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:35.741 [2024-12-05 14:18:41.942122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:35.741 [2024-12-05 14:18:41.942273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.741 [2024-12-05 14:18:41.942278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.741 [2024-12-05 14:18:41.942287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.741 [2024-12-05 14:18:41.942292] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.741 [2024-12-05 14:18:41.954121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.741 [2024-12-05 14:18:41.954752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.741 [2024-12-05 14:18:41.954782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:35.741 [2024-12-05 14:18:41.954790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:35.742 [2024-12-05 14:18:41.954955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:35.742 [2024-12-05 14:18:41.955108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.742 [2024-12-05 14:18:41.955114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.742 [2024-12-05 14:18:41.955119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.742 [2024-12-05 14:18:41.955125] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:35.742 [2024-12-05 14:18:41.966829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.742 [2024-12-05 14:18:41.967169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.742 [2024-12-05 14:18:41.967185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:35.742 [2024-12-05 14:18:41.967191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:35.742 [2024-12-05 14:18:41.967341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:35.742 [2024-12-05 14:18:41.967498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.742 [2024-12-05 14:18:41.967505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.742 [2024-12-05 14:18:41.967510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.742 [2024-12-05 14:18:41.967514] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
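The recurring "Failed to flush tqpair ... (9): Bad file descriptor" entries are a direct consequence of the refused connect: by the time the qpair is flushed its socket has already been closed, so the flush operates on a dead fd and errno 9 (EBADF) comes back. A minimal sketch of that errno pairing, independent of SPDK:

/* Sketch: errno 9 (EBADF) is what any I/O on an already-closed socket
 * returns, which is why the flush error always follows the refused connect. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    close(fd);                      /* tear the socket down first */
    char byte = 0;
    if (send(fd, &byte, 1, 0) < 0)  /* then try to use it */
        printf("send failed, errno = %d (%s)\n", errno, strerror(errno));
    return 0;
}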
00:28:35.742 [2024-12-05 14:18:41.979492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.742 [2024-12-05 14:18:41.979978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.742 [2024-12-05 14:18:41.979991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:35.742 [2024-12-05 14:18:41.979996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:35.742 [2024-12-05 14:18:41.980146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:35.742 [2024-12-05 14:18:41.980295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.742 [2024-12-05 14:18:41.980301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.742 [2024-12-05 14:18:41.980306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.742 [2024-12-05 14:18:41.980310] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:35.742 [2024-12-05 14:18:41.992187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.742 [2024-12-05 14:18:41.992677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.742 [2024-12-05 14:18:41.992691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:35.742 [2024-12-05 14:18:41.992696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:35.742 [2024-12-05 14:18:41.992846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:35.742 [2024-12-05 14:18:41.992996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.742 [2024-12-05 14:18:41.993002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.742 [2024-12-05 14:18:41.993007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.742 [2024-12-05 14:18:41.993011] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.742 [2024-12-05 14:18:42.004845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.742 [2024-12-05 14:18:42.005408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.742 [2024-12-05 14:18:42.005438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:35.742 [2024-12-05 14:18:42.005447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:35.742 [2024-12-05 14:18:42.005622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:35.742 [2024-12-05 14:18:42.005775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.742 [2024-12-05 14:18:42.005781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.742 [2024-12-05 14:18:42.005787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.742 [2024-12-05 14:18:42.005792] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:35.742 [2024-12-05 14:18:42.017488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.742 [2024-12-05 14:18:42.017980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.742 [2024-12-05 14:18:42.017995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:35.742 [2024-12-05 14:18:42.018001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:35.742 [2024-12-05 14:18:42.018151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:35.742 [2024-12-05 14:18:42.018301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.742 [2024-12-05 14:18:42.018307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.742 [2024-12-05 14:18:42.018311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.742 [2024-12-05 14:18:42.018316] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:35.742 [2024-12-05 14:18:42.030168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:35.742 [2024-12-05 14:18:42.030625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.742 [2024-12-05 14:18:42.030638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:35.742 [2024-12-05 14:18:42.030647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:35.742 [2024-12-05 14:18:42.030798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:35.742 [2024-12-05 14:18:42.030948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:35.742 [2024-12-05 14:18:42.030954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:35.742 [2024-12-05 14:18:42.030959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:35.742 [2024-12-05 14:18:42.030964] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:36.004 [2024-12-05 14:18:42.042796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.004 [2024-12-05 14:18:42.043287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.004 [2024-12-05 14:18:42.043300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.004 [2024-12-05 14:18:42.043306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.004 [2024-12-05 14:18:42.043460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.004 [2024-12-05 14:18:42.043611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.004 [2024-12-05 14:18:42.043616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.004 [2024-12-05 14:18:42.043622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.004 [2024-12-05 14:18:42.043626] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.004 [2024-12-05 14:18:42.055485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.004 [2024-12-05 14:18:42.055933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.004 [2024-12-05 14:18:42.055946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.004 [2024-12-05 14:18:42.055952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.004 [2024-12-05 14:18:42.056102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.004 [2024-12-05 14:18:42.056253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.004 [2024-12-05 14:18:42.056259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.004 [2024-12-05 14:18:42.056264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.004 [2024-12-05 14:18:42.056268] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:36.004 [2024-12-05 14:18:42.068097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.004 [2024-12-05 14:18:42.068671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.004 [2024-12-05 14:18:42.068702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.004 [2024-12-05 14:18:42.068711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.004 [2024-12-05 14:18:42.068877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.004 [2024-12-05 14:18:42.069035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.004 [2024-12-05 14:18:42.069042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.004 [2024-12-05 14:18:42.069047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.004 [2024-12-05 14:18:42.069053] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.004 [2024-12-05 14:18:42.080764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.004 [2024-12-05 14:18:42.081331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.004 [2024-12-05 14:18:42.081361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.004 [2024-12-05 14:18:42.081371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.004 [2024-12-05 14:18:42.081544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.004 [2024-12-05 14:18:42.081697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.004 [2024-12-05 14:18:42.081703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.004 [2024-12-05 14:18:42.081708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.004 [2024-12-05 14:18:42.081714] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:36.004 [2024-12-05 14:18:42.093403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.004 [2024-12-05 14:18:42.093974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.004 [2024-12-05 14:18:42.094005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.004 [2024-12-05 14:18:42.094014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.004 [2024-12-05 14:18:42.094179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.004 [2024-12-05 14:18:42.094331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.004 [2024-12-05 14:18:42.094337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.004 [2024-12-05 14:18:42.094343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.004 [2024-12-05 14:18:42.094349] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.004 [2024-12-05 14:18:42.106051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.004 [2024-12-05 14:18:42.106594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.004 [2024-12-05 14:18:42.106624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.004 [2024-12-05 14:18:42.106633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.004 [2024-12-05 14:18:42.106801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.004 [2024-12-05 14:18:42.106954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.004 [2024-12-05 14:18:42.106960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.004 [2024-12-05 14:18:42.106972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.004 [2024-12-05 14:18:42.106978] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:36.004 [2024-12-05 14:18:42.118666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.004 [2024-12-05 14:18:42.119145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.004 [2024-12-05 14:18:42.119175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.004 [2024-12-05 14:18:42.119184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.005 [2024-12-05 14:18:42.119349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.005 [2024-12-05 14:18:42.119508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.005 [2024-12-05 14:18:42.119515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.005 [2024-12-05 14:18:42.119520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.005 [2024-12-05 14:18:42.119525] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.005 [2024-12-05 14:18:42.131360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.005 [2024-12-05 14:18:42.131864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.005 [2024-12-05 14:18:42.131879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.005 [2024-12-05 14:18:42.131885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.005 [2024-12-05 14:18:42.132035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.005 [2024-12-05 14:18:42.132185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.005 [2024-12-05 14:18:42.132190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.005 [2024-12-05 14:18:42.132195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.005 [2024-12-05 14:18:42.132200] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:36.005 [2024-12-05 14:18:42.144016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.005 [2024-12-05 14:18:42.144565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.005 [2024-12-05 14:18:42.144595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.005 [2024-12-05 14:18:42.144604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.005 [2024-12-05 14:18:42.144772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.005 [2024-12-05 14:18:42.144924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.005 [2024-12-05 14:18:42.144931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.005 [2024-12-05 14:18:42.144936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.005 [2024-12-05 14:18:42.144941] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.005 [2024-12-05 14:18:42.156634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.005 [2024-12-05 14:18:42.157195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.005 [2024-12-05 14:18:42.157224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.005 [2024-12-05 14:18:42.157233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.005 [2024-12-05 14:18:42.157398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.005 [2024-12-05 14:18:42.157557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.005 [2024-12-05 14:18:42.157564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.005 [2024-12-05 14:18:42.157570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.005 [2024-12-05 14:18:42.157576] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:36.005 [2024-12-05 14:18:42.169261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.005 [2024-12-05 14:18:42.169827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.005 [2024-12-05 14:18:42.169858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.005 [2024-12-05 14:18:42.169866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.005 [2024-12-05 14:18:42.170031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.005 [2024-12-05 14:18:42.170185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.005 [2024-12-05 14:18:42.170191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.005 [2024-12-05 14:18:42.170196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.005 [2024-12-05 14:18:42.170202] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.005 [2024-12-05 14:18:42.181890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.005 [2024-12-05 14:18:42.182350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.005 [2024-12-05 14:18:42.182365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.005 [2024-12-05 14:18:42.182370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.005 [2024-12-05 14:18:42.182524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.005 [2024-12-05 14:18:42.182676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.005 [2024-12-05 14:18:42.182682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.005 [2024-12-05 14:18:42.182687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.005 [2024-12-05 14:18:42.182692] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:36.005 [2024-12-05 14:18:42.194506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.005 [2024-12-05 14:18:42.194952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.005 [2024-12-05 14:18:42.194965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.005 [2024-12-05 14:18:42.194975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.005 [2024-12-05 14:18:42.195125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.005 [2024-12-05 14:18:42.195275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.005 [2024-12-05 14:18:42.195281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.005 [2024-12-05 14:18:42.195286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.005 [2024-12-05 14:18:42.195290] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.005 [2024-12-05 14:18:42.207108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.005 [2024-12-05 14:18:42.207463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.005 [2024-12-05 14:18:42.207477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.005 [2024-12-05 14:18:42.207483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.005 [2024-12-05 14:18:42.207633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.005 [2024-12-05 14:18:42.207783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.005 [2024-12-05 14:18:42.207788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.005 [2024-12-05 14:18:42.207793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.005 [2024-12-05 14:18:42.207798] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:36.005 [2024-12-05 14:18:42.219756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.005 [2024-12-05 14:18:42.220237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.005 [2024-12-05 14:18:42.220250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.005 [2024-12-05 14:18:42.220255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.005 [2024-12-05 14:18:42.220404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.005 [2024-12-05 14:18:42.220558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.005 [2024-12-05 14:18:42.220565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.005 [2024-12-05 14:18:42.220570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.005 [2024-12-05 14:18:42.220574] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.005 [2024-12-05 14:18:42.232405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.005 [2024-12-05 14:18:42.232876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.005 [2024-12-05 14:18:42.232889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.005 [2024-12-05 14:18:42.232895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.005 [2024-12-05 14:18:42.233044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.005 [2024-12-05 14:18:42.233197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.005 [2024-12-05 14:18:42.233203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.005 [2024-12-05 14:18:42.233208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.005 [2024-12-05 14:18:42.233212] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:36.005 [2024-12-05 14:18:42.245032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.005 [2024-12-05 14:18:42.245521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.005 [2024-12-05 14:18:42.245533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.005 [2024-12-05 14:18:42.245539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.005 [2024-12-05 14:18:42.245688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.006 [2024-12-05 14:18:42.245837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.006 [2024-12-05 14:18:42.245842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.006 [2024-12-05 14:18:42.245847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.006 [2024-12-05 14:18:42.245852] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.006 [2024-12-05 14:18:42.257668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.006 [2024-12-05 14:18:42.258152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.006 [2024-12-05 14:18:42.258164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.006 [2024-12-05 14:18:42.258169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.006 [2024-12-05 14:18:42.258319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.006 [2024-12-05 14:18:42.258473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.006 [2024-12-05 14:18:42.258479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.006 [2024-12-05 14:18:42.258484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.006 [2024-12-05 14:18:42.258488] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:36.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2907744 Killed "${NVMF_APP[@]}" "$@" 00:28:36.006 14:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:36.006 14:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:36.006 14:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:36.006 14:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:36.006 14:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:36.006 [2024-12-05 14:18:42.270299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.006 14:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2909362 00:28:36.006 [2024-12-05 14:18:42.270775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.006 [2024-12-05 14:18:42.270810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.006 [2024-12-05 14:18:42.270819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.006 14:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2909362 00:28:36.006 [2024-12-05 14:18:42.270986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.006 [2024-12-05 14:18:42.271140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.006 [2024-12-05 14:18:42.271147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.006 [2024-12-05 14:18:42.271153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:28:36.006 [2024-12-05 14:18:42.271158] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:36.006 14:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:36.006 14:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2909362 ']' 00:28:36.006 14:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.006 14:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:36.006 14:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.006 14:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:36.006 14:18:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:36.006 [2024-12-05 14:18:42.282983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.006 [2024-12-05 14:18:42.283474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.006 [2024-12-05 14:18:42.283489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.006 [2024-12-05 14:18:42.283495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.006 [2024-12-05 14:18:42.283646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.006 [2024-12-05 14:18:42.283796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.006 [2024-12-05 14:18:42.283802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.006 [2024-12-05 14:18:42.283807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.006 [2024-12-05 14:18:42.283812] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
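Interleaved with the reconnect errors, bdevperf.sh has now killed the old target process (pid 2907744), and tgt_init/nvmfappstart is launching a replacement nvmf_tgt (pid 2909362) inside the cvl_0_0_ns_spdk network namespace; waitforlisten then blocks until the new process is serving its RPC socket at /var/tmp/spdk.sock. Below is a hedged sketch of that readiness check - the real waitforlisten is a shell helper in SPDK's autotest_common.sh, and the 100 ms interval and retry budget here are assumptions:

/* Sketch of the readiness check waitforlisten performs: keep connecting to
 * the RPC socket until the freshly started nvmf_tgt accepts. The 100 ms
 * poll interval and 100-try budget are illustrative assumptions. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <time.h>
#include <unistd.h>

static int rpc_socket_ready(const char *path)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int try = 0; try < 100; try++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;            /* target is up and listening */
        }
        close(fd);
        struct timespec delay = { .tv_sec = 0, .tv_nsec = 100 * 1000 * 1000 };
        nanosleep(&delay, NULL);
    }
    return -1;
}

int main(void)
{
    if (rpc_socket_ready("/var/tmp/spdk.sock") == 0)
        puts("nvmf_tgt is listening");
    else
        puts("timed out waiting for /var/tmp/spdk.sock");
    return 0;
}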
00:28:36.006 [2024-12-05 14:18:42.295631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.006 [2024-12-05 14:18:42.296221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.006 [2024-12-05 14:18:42.296251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.006 [2024-12-05 14:18:42.296260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.006 [2024-12-05 14:18:42.296426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.006 [2024-12-05 14:18:42.296584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.006 [2024-12-05 14:18:42.296595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.006 [2024-12-05 14:18:42.296601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.006 [2024-12-05 14:18:42.296606] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:36.267 [2024-12-05 14:18:42.308289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.267 [2024-12-05 14:18:42.308722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.267 [2024-12-05 14:18:42.308738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.267 [2024-12-05 14:18:42.308743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.267 [2024-12-05 14:18:42.308893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.267 [2024-12-05 14:18:42.309043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.267 [2024-12-05 14:18:42.309049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.267 [2024-12-05 14:18:42.309054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.267 [2024-12-05 14:18:42.309059] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.267 [2024-12-05 14:18:42.321020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.267 [2024-12-05 14:18:42.321466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.267 [2024-12-05 14:18:42.321481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.267 [2024-12-05 14:18:42.321486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.267 [2024-12-05 14:18:42.321636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.267 [2024-12-05 14:18:42.321786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.267 [2024-12-05 14:18:42.321793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.267 [2024-12-05 14:18:42.321798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.267 [2024-12-05 14:18:42.321804] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:36.267 [2024-12-05 14:18:42.322029] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:28:36.267 [2024-12-05 14:18:42.322082] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:36.267 [2024-12-05 14:18:42.333646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.267 [2024-12-05 14:18:42.334134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.267 [2024-12-05 14:18:42.334147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.267 [2024-12-05 14:18:42.334153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.267 [2024-12-05 14:18:42.334303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.267 [2024-12-05 14:18:42.334453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.267 [2024-12-05 14:18:42.334468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.267 [2024-12-05 14:18:42.334473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.267 [2024-12-05 14:18:42.334478] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
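The DPDK EAL parameter line above reflects the nvmf_tgt invocation: -m 0xE is the core mask (bits 1-3 set, so the target's reactors run on cores 1, 2 and 3, leaving core 0 free), -i 0 selects shared-memory instance 0, and -e 0xFFFF enables a broad tracepoint group mask. A tiny sketch decoding such a mask:

/* Sketch: decode an SPDK/DPDK-style hex core mask; 0xE -> cores 1, 2, 3. */
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xE;           /* from: nvmf_tgt ... -m 0xE */
    printf("mask 0x%lX -> cores:", mask);
    for (int core = 0; mask != 0; core++, mask >>= 1)
        if (mask & 1)
            printf(" %d", core);
    putchar('\n');
    return 0;
}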
00:28:36.267 [2024-12-05 14:18:42.346289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.267 [2024-12-05 14:18:42.346771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.267 [2024-12-05 14:18:42.346802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.267 [2024-12-05 14:18:42.346811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.267 [2024-12-05 14:18:42.346978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.267 [2024-12-05 14:18:42.347131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.267 [2024-12-05 14:18:42.347137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.267 [2024-12-05 14:18:42.347143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.267 [2024-12-05 14:18:42.347149] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:36.267 [2024-12-05 14:18:42.358981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.267 [2024-12-05 14:18:42.359504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.267 [2024-12-05 14:18:42.359525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.267 [2024-12-05 14:18:42.359531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.267 [2024-12-05 14:18:42.359687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.268 [2024-12-05 14:18:42.359838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.268 [2024-12-05 14:18:42.359844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.268 [2024-12-05 14:18:42.359849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.268 [2024-12-05 14:18:42.359854] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.268 [2024-12-05 14:18:42.371655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.268 [2024-12-05 14:18:42.372225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.268 [2024-12-05 14:18:42.372255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.268 [2024-12-05 14:18:42.372263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.268 [2024-12-05 14:18:42.372429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.268 [2024-12-05 14:18:42.372590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.268 [2024-12-05 14:18:42.372597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.268 [2024-12-05 14:18:42.372603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.268 [2024-12-05 14:18:42.372613] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.268 [2024-12-05 14:18:42.384313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.268 [2024-12-05 14:18:42.384878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.268 [2024-12-05 14:18:42.384909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.268 [2024-12-05 14:18:42.384918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.268 [2024-12-05 14:18:42.385084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.268 [2024-12-05 14:18:42.385237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.268 [2024-12-05 14:18:42.385243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.268 [2024-12-05 14:18:42.385249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.268 [2024-12-05 14:18:42.385255] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.268 [2024-12-05 14:18:42.396950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.268 [2024-12-05 14:18:42.397448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.268 [2024-12-05 14:18:42.397468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.268 [2024-12-05 14:18:42.397474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.268 [2024-12-05 14:18:42.397624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.268 [2024-12-05 14:18:42.397775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.268 [2024-12-05 14:18:42.397782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.268 [2024-12-05 14:18:42.397789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.268 [2024-12-05 14:18:42.397793] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.268 [2024-12-05 14:18:42.409619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.268 [2024-12-05 14:18:42.410068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.268 [2024-12-05 14:18:42.410080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.268 [2024-12-05 14:18:42.410086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.268 [2024-12-05 14:18:42.410235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.268 [2024-12-05 14:18:42.410385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.268 [2024-12-05 14:18:42.410391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.268 [2024-12-05 14:18:42.410396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.268 [2024-12-05 14:18:42.410401] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.268 [2024-12-05 14:18:42.411925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:36.268 [2024-12-05 14:18:42.422234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.268 [2024-12-05 14:18:42.422714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.268 [2024-12-05 14:18:42.422728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.268 [2024-12-05 14:18:42.422733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.268 [2024-12-05 14:18:42.422884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.268 [2024-12-05 14:18:42.423034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.268 [2024-12-05 14:18:42.423040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.268 [2024-12-05 14:18:42.423045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.268 [2024-12-05 14:18:42.423050] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.268 [2024-12-05 14:18:42.434895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.268 [2024-12-05 14:18:42.435362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.268 [2024-12-05 14:18:42.435375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.268 [2024-12-05 14:18:42.435381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.268 [2024-12-05 14:18:42.435572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.268 [2024-12-05 14:18:42.435723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.268 [2024-12-05 14:18:42.435730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.268 [2024-12-05 14:18:42.435735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.268 [2024-12-05 14:18:42.435739] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.268 [2024-12-05 14:18:42.441233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:36.268 [2024-12-05 14:18:42.441257] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:36.268 [2024-12-05 14:18:42.441264] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:36.268 [2024-12-05 14:18:42.441269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:36.268 [2024-12-05 14:18:42.441273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:36.268 [2024-12-05 14:18:42.442356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:28:36.268 [2024-12-05 14:18:42.442509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:28:36.268 [2024-12-05 14:18:42.442725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:36.268 [2024-12-05 14:18:42.447568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.268 [2024-12-05 14:18:42.448032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.268 [2024-12-05 14:18:42.448046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.268 [2024-12-05 14:18:42.448051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.268 [2024-12-05 14:18:42.448201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.268 [2024-12-05 14:18:42.448356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.268 [2024-12-05 14:18:42.448363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.268 [2024-12-05 14:18:42.448369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.268 [2024-12-05 14:18:42.448374] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.268 [2024-12-05 14:18:42.460197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.268 [2024-12-05 14:18:42.460797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.268 [2024-12-05 14:18:42.460830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.268 [2024-12-05 14:18:42.460840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.269 [2024-12-05 14:18:42.461012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.269 [2024-12-05 14:18:42.461165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.269 [2024-12-05 14:18:42.461171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.269 [2024-12-05 14:18:42.461177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.269 [2024-12-05 14:18:42.461183] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.269 [2024-12-05 14:18:42.472875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.269 [2024-12-05 14:18:42.473500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.269 [2024-12-05 14:18:42.473532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.269 [2024-12-05 14:18:42.473541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.269 [2024-12-05 14:18:42.473712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.269 [2024-12-05 14:18:42.473865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.269 [2024-12-05 14:18:42.473871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.269 [2024-12-05 14:18:42.473877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.269 [2024-12-05 14:18:42.473883] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.269 [2024-12-05 14:18:42.485576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.269 [2024-12-05 14:18:42.486143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.269 [2024-12-05 14:18:42.486175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.269 [2024-12-05 14:18:42.486183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.269 [2024-12-05 14:18:42.486350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.269 [2024-12-05 14:18:42.486507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.269 [2024-12-05 14:18:42.486515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.269 [2024-12-05 14:18:42.486525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.269 [2024-12-05 14:18:42.486531] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.269 [2024-12-05 14:18:42.498215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.269 [2024-12-05 14:18:42.498816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.269 [2024-12-05 14:18:42.498846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.269 [2024-12-05 14:18:42.498855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.269 [2024-12-05 14:18:42.499021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.269 [2024-12-05 14:18:42.499174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.269 [2024-12-05 14:18:42.499180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.269 [2024-12-05 14:18:42.499186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.269 [2024-12-05 14:18:42.499191] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.269 [2024-12-05 14:18:42.510880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.269 [2024-12-05 14:18:42.511422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.269 [2024-12-05 14:18:42.511453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.269 [2024-12-05 14:18:42.511467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.269 [2024-12-05 14:18:42.511633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.269 [2024-12-05 14:18:42.511786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.269 [2024-12-05 14:18:42.511792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.269 [2024-12-05 14:18:42.511798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.269 [2024-12-05 14:18:42.511804] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.269 [2024-12-05 14:18:42.523495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.269 [2024-12-05 14:18:42.523745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.269 [2024-12-05 14:18:42.523760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.269 [2024-12-05 14:18:42.523766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.269 [2024-12-05 14:18:42.523917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.269 [2024-12-05 14:18:42.524066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.269 [2024-12-05 14:18:42.524073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.269 [2024-12-05 14:18:42.524078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.269 [2024-12-05 14:18:42.524083] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.269 [2024-12-05 14:18:42.536196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.269 [2024-12-05 14:18:42.536800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.269 [2024-12-05 14:18:42.536830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.269 [2024-12-05 14:18:42.536839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.269 [2024-12-05 14:18:42.537004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.269 [2024-12-05 14:18:42.537157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.269 [2024-12-05 14:18:42.537163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.269 [2024-12-05 14:18:42.537168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.269 [2024-12-05 14:18:42.537174] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.269 [2024-12-05 14:18:42.548856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.269 [2024-12-05 14:18:42.549320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.269 [2024-12-05 14:18:42.549335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.269 [2024-12-05 14:18:42.549340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.269 [2024-12-05 14:18:42.549496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.269 [2024-12-05 14:18:42.549647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.269 [2024-12-05 14:18:42.549653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.269 [2024-12-05 14:18:42.549658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.269 [2024-12-05 14:18:42.549663] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.269 [2024-12-05 14:18:42.561470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.269 [2024-12-05 14:18:42.562016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.269 [2024-12-05 14:18:42.562046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.269 [2024-12-05 14:18:42.562056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.269 [2024-12-05 14:18:42.562222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.269 [2024-12-05 14:18:42.562374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.269 [2024-12-05 14:18:42.562381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.269 [2024-12-05 14:18:42.562387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.269 [2024-12-05 14:18:42.562393] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.530 [2024-12-05 14:18:42.574078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.530 [2024-12-05 14:18:42.574447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.530 [2024-12-05 14:18:42.574469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.530 [2024-12-05 14:18:42.574479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.530 [2024-12-05 14:18:42.574631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.530 [2024-12-05 14:18:42.574782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.530 [2024-12-05 14:18:42.574789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.530 [2024-12-05 14:18:42.574794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.530 [2024-12-05 14:18:42.574800] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.530 [2024-12-05 14:18:42.586762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.530 [2024-12-05 14:18:42.587321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.530 [2024-12-05 14:18:42.587351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.530 [2024-12-05 14:18:42.587360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.530 [2024-12-05 14:18:42.587532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.530 [2024-12-05 14:18:42.587685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.530 [2024-12-05 14:18:42.587691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.530 [2024-12-05 14:18:42.587698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.530 [2024-12-05 14:18:42.587703] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.530 [2024-12-05 14:18:42.599381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.530 [2024-12-05 14:18:42.599930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.530 [2024-12-05 14:18:42.599961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.530 [2024-12-05 14:18:42.599970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.530 [2024-12-05 14:18:42.600136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.530 [2024-12-05 14:18:42.600289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.530 [2024-12-05 14:18:42.600295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.530 [2024-12-05 14:18:42.600301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.530 [2024-12-05 14:18:42.600306] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.530 [2024-12-05 14:18:42.612019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.530 [2024-12-05 14:18:42.612468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.530 [2024-12-05 14:18:42.612484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.530 [2024-12-05 14:18:42.612490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.530 [2024-12-05 14:18:42.612641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.530 [2024-12-05 14:18:42.612795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.530 [2024-12-05 14:18:42.612802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.530 [2024-12-05 14:18:42.612807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.530 [2024-12-05 14:18:42.612812] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.530 [2024-12-05 14:18:42.624644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.530 [2024-12-05 14:18:42.625105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.530 [2024-12-05 14:18:42.625135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.531 [2024-12-05 14:18:42.625144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.531 [2024-12-05 14:18:42.625309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.531 [2024-12-05 14:18:42.625476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.531 [2024-12-05 14:18:42.625483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.531 [2024-12-05 14:18:42.625489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.531 [2024-12-05 14:18:42.625494] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.531 [2024-12-05 14:18:42.637238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.531 [2024-12-05 14:18:42.637844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.531 [2024-12-05 14:18:42.637874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.531 [2024-12-05 14:18:42.637883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.531 [2024-12-05 14:18:42.638048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.531 [2024-12-05 14:18:42.638201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.531 [2024-12-05 14:18:42.638207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.531 [2024-12-05 14:18:42.638213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.531 [2024-12-05 14:18:42.638218] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.531 [2024-12-05 14:18:42.649908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.531 [2024-12-05 14:18:42.650490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.531 [2024-12-05 14:18:42.650520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.531 [2024-12-05 14:18:42.650529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.531 [2024-12-05 14:18:42.650697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.531 [2024-12-05 14:18:42.650850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.531 [2024-12-05 14:18:42.650856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.531 [2024-12-05 14:18:42.650865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.531 [2024-12-05 14:18:42.650871] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.531 [2024-12-05 14:18:42.662559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.531 [2024-12-05 14:18:42.662889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.531 [2024-12-05 14:18:42.662904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.531 [2024-12-05 14:18:42.662909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.531 [2024-12-05 14:18:42.663059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.531 [2024-12-05 14:18:42.663209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.531 [2024-12-05 14:18:42.663214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.531 [2024-12-05 14:18:42.663219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.531 [2024-12-05 14:18:42.663224] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.531 [2024-12-05 14:18:42.675197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.531 [2024-12-05 14:18:42.675666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.531 [2024-12-05 14:18:42.675680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.531 [2024-12-05 14:18:42.675686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.531 [2024-12-05 14:18:42.675837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.531 [2024-12-05 14:18:42.675986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.531 [2024-12-05 14:18:42.675992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.531 [2024-12-05 14:18:42.675997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.531 [2024-12-05 14:18:42.676002] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.531 [2024-12-05 14:18:42.687815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.531 [2024-12-05 14:18:42.688310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.531 [2024-12-05 14:18:42.688322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.531 [2024-12-05 14:18:42.688327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.531 [2024-12-05 14:18:42.688481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.531 [2024-12-05 14:18:42.688632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.531 [2024-12-05 14:18:42.688637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.531 [2024-12-05 14:18:42.688642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.531 [2024-12-05 14:18:42.688647] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.531 [2024-12-05 14:18:42.700419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.531 [2024-12-05 14:18:42.700884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.531 [2024-12-05 14:18:42.700914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.531 [2024-12-05 14:18:42.700923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.531 [2024-12-05 14:18:42.701088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.531 [2024-12-05 14:18:42.701241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.531 [2024-12-05 14:18:42.701247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.531 [2024-12-05 14:18:42.701253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.531 [2024-12-05 14:18:42.701258] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.531 [2024-12-05 14:18:42.713082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.531 [2024-12-05 14:18:42.713663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.531 [2024-12-05 14:18:42.713693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.531 [2024-12-05 14:18:42.713702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.531 [2024-12-05 14:18:42.713867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.531 [2024-12-05 14:18:42.714020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.531 [2024-12-05 14:18:42.714026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.531 [2024-12-05 14:18:42.714031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.531 [2024-12-05 14:18:42.714037] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.531 [2024-12-05 14:18:42.725739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.531 [2024-12-05 14:18:42.726281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.531 [2024-12-05 14:18:42.726311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.531 [2024-12-05 14:18:42.726319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.531 [2024-12-05 14:18:42.726491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.531 [2024-12-05 14:18:42.726644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.531 [2024-12-05 14:18:42.726650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.531 [2024-12-05 14:18:42.726656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.531 [2024-12-05 14:18:42.726661] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.531 [2024-12-05 14:18:42.738337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.531 [2024-12-05 14:18:42.738803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.531 [2024-12-05 14:18:42.738819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.531 [2024-12-05 14:18:42.738830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.531 [2024-12-05 14:18:42.738981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.531 [2024-12-05 14:18:42.739131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.531 [2024-12-05 14:18:42.739137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.531 [2024-12-05 14:18:42.739142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.531 [2024-12-05 14:18:42.739147] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.531 [2024-12-05 14:18:42.750963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.531 [2024-12-05 14:18:42.751511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.531 [2024-12-05 14:18:42.751541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.531 [2024-12-05 14:18:42.751550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.531 [2024-12-05 14:18:42.751718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.531 [2024-12-05 14:18:42.751871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.531 [2024-12-05 14:18:42.751877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.531 [2024-12-05 14:18:42.751883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.531 [2024-12-05 14:18:42.751888] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.531 [2024-12-05 14:18:42.763571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.531 [2024-12-05 14:18:42.764040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.531 [2024-12-05 14:18:42.764055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.531 [2024-12-05 14:18:42.764061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.531 [2024-12-05 14:18:42.764211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.531 [2024-12-05 14:18:42.764361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.531 [2024-12-05 14:18:42.764366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.531 [2024-12-05 14:18:42.764371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.531 [2024-12-05 14:18:42.764376] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.531 [2024-12-05 14:18:42.776187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.531 [2024-12-05 14:18:42.776740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.531 [2024-12-05 14:18:42.776771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.531 [2024-12-05 14:18:42.776780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.531 [2024-12-05 14:18:42.776945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.531 [2024-12-05 14:18:42.777101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.531 [2024-12-05 14:18:42.777108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.531 [2024-12-05 14:18:42.777113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.531 [2024-12-05 14:18:42.777119] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.531 [2024-12-05 14:18:42.788801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.531 [2024-12-05 14:18:42.789357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.531 [2024-12-05 14:18:42.789386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.531 [2024-12-05 14:18:42.789395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.531 [2024-12-05 14:18:42.789567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.531 [2024-12-05 14:18:42.789720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.531 [2024-12-05 14:18:42.789727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.531 [2024-12-05 14:18:42.789732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.531 [2024-12-05 14:18:42.789737] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.531 [2024-12-05 14:18:42.801413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.531 [2024-12-05 14:18:42.801763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.531 [2024-12-05 14:18:42.801778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.531 [2024-12-05 14:18:42.801784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.531 [2024-12-05 14:18:42.801934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.531 [2024-12-05 14:18:42.802084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.531 [2024-12-05 14:18:42.802089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.531 [2024-12-05 14:18:42.802094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.531 [2024-12-05 14:18:42.802099] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.531 4182.50 IOPS, 16.34 MiB/s [2024-12-05T13:18:42.831Z] [2024-12-05 14:18:42.815187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.531 [2024-12-05 14:18:42.815788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.531 [2024-12-05 14:18:42.815819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.531 [2024-12-05 14:18:42.815827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.531 [2024-12-05 14:18:42.815993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.531 [2024-12-05 14:18:42.816145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.531 [2024-12-05 14:18:42.816153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.531 [2024-12-05 14:18:42.816163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.531 [2024-12-05 14:18:42.816169] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.816 [2024-12-05 14:18:42.827871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.816 [2024-12-05 14:18:42.828339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.816 [2024-12-05 14:18:42.828353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.816 [2024-12-05 14:18:42.828359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.816 [2024-12-05 14:18:42.828515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.816 [2024-12-05 14:18:42.828666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.816 [2024-12-05 14:18:42.828672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.816 [2024-12-05 14:18:42.828677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.816 [2024-12-05 14:18:42.828682] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.816 [2024-12-05 14:18:42.840496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.816 [2024-12-05 14:18:42.840942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.816 [2024-12-05 14:18:42.840955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.816 [2024-12-05 14:18:42.840960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.816 [2024-12-05 14:18:42.841109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.816 [2024-12-05 14:18:42.841259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.816 [2024-12-05 14:18:42.841265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.816 [2024-12-05 14:18:42.841270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.816 [2024-12-05 14:18:42.841275] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.816 [2024-12-05 14:18:42.853091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.816 [2024-12-05 14:18:42.853558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.816 [2024-12-05 14:18:42.853589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.816 [2024-12-05 14:18:42.853597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.816 [2024-12-05 14:18:42.853766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.816 [2024-12-05 14:18:42.853919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.816 [2024-12-05 14:18:42.853925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.816 [2024-12-05 14:18:42.853930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.816 [2024-12-05 14:18:42.853935] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.816 [2024-12-05 14:18:42.865769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.816 [2024-12-05 14:18:42.866330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.816 [2024-12-05 14:18:42.866360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.816 [2024-12-05 14:18:42.866369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.816 [2024-12-05 14:18:42.866541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.816 [2024-12-05 14:18:42.866694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.816 [2024-12-05 14:18:42.866700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.816 [2024-12-05 14:18:42.866706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.816 [2024-12-05 14:18:42.866711] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.816 [2024-12-05 14:18:42.878394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.816 [2024-12-05 14:18:42.878928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.816 [2024-12-05 14:18:42.878959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.816 [2024-12-05 14:18:42.878967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.816 [2024-12-05 14:18:42.879132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.816 [2024-12-05 14:18:42.879285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.816 [2024-12-05 14:18:42.879291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.816 [2024-12-05 14:18:42.879297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.816 [2024-12-05 14:18:42.879302] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.816 [2024-12-05 14:18:42.891124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.816 [2024-12-05 14:18:42.891693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.816 [2024-12-05 14:18:42.891723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420
00:28:36.816 [2024-12-05 14:18:42.891731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set
00:28:36.816 [2024-12-05 14:18:42.891897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor
00:28:36.816 [2024-12-05 14:18:42.892049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.816 [2024-12-05 14:18:42.892055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.816 [2024-12-05 14:18:42.892060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.816 [2024-12-05 14:18:42.892066] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.816 [2024-12-05 14:18:42.903749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.816 [2024-12-05 14:18:42.904301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.816 [2024-12-05 14:18:42.904331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.816 [2024-12-05 14:18:42.904343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.816 [2024-12-05 14:18:42.904516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.816 [2024-12-05 14:18:42.904669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.816 [2024-12-05 14:18:42.904675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.816 [2024-12-05 14:18:42.904681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.816 [2024-12-05 14:18:42.904686] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:36.816 [2024-12-05 14:18:42.916359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.816 [2024-12-05 14:18:42.916957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.816 [2024-12-05 14:18:42.916987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.816 [2024-12-05 14:18:42.916995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.816 [2024-12-05 14:18:42.917161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.816 [2024-12-05 14:18:42.917313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.816 [2024-12-05 14:18:42.917320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.816 [2024-12-05 14:18:42.917325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.816 [2024-12-05 14:18:42.917330] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.816 [2024-12-05 14:18:42.929030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.816 [2024-12-05 14:18:42.929511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.816 [2024-12-05 14:18:42.929541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.816 [2024-12-05 14:18:42.929550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.816 [2024-12-05 14:18:42.929715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.816 [2024-12-05 14:18:42.929868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.816 [2024-12-05 14:18:42.929875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.816 [2024-12-05 14:18:42.929880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.816 [2024-12-05 14:18:42.929886] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:36.816 [2024-12-05 14:18:42.941719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.816 [2024-12-05 14:18:42.942300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.816 [2024-12-05 14:18:42.942330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.816 [2024-12-05 14:18:42.942339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.817 [2024-12-05 14:18:42.942511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.817 [2024-12-05 14:18:42.942669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.817 [2024-12-05 14:18:42.942675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.817 [2024-12-05 14:18:42.942681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.817 [2024-12-05 14:18:42.942686] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.817 [2024-12-05 14:18:42.954362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.817 [2024-12-05 14:18:42.954949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.817 [2024-12-05 14:18:42.954979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.817 [2024-12-05 14:18:42.954988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.817 [2024-12-05 14:18:42.955154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.817 [2024-12-05 14:18:42.955307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.817 [2024-12-05 14:18:42.955313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.817 [2024-12-05 14:18:42.955318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.817 [2024-12-05 14:18:42.955324] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:36.817 [2024-12-05 14:18:42.967015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.817 [2024-12-05 14:18:42.967583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.817 [2024-12-05 14:18:42.967614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.817 [2024-12-05 14:18:42.967623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.817 [2024-12-05 14:18:42.967788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.817 [2024-12-05 14:18:42.967941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.817 [2024-12-05 14:18:42.967948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.817 [2024-12-05 14:18:42.967953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.817 [2024-12-05 14:18:42.967959] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.817 [2024-12-05 14:18:42.979645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.817 [2024-12-05 14:18:42.980186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.817 [2024-12-05 14:18:42.980217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.817 [2024-12-05 14:18:42.980225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.817 [2024-12-05 14:18:42.980391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.817 [2024-12-05 14:18:42.980550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.817 [2024-12-05 14:18:42.980557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.817 [2024-12-05 14:18:42.980566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.817 [2024-12-05 14:18:42.980571] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:36.817 [2024-12-05 14:18:42.992244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.817 [2024-12-05 14:18:42.992864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.817 [2024-12-05 14:18:42.992894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.817 [2024-12-05 14:18:42.992903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.817 [2024-12-05 14:18:42.993069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.817 [2024-12-05 14:18:42.993221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.817 [2024-12-05 14:18:42.993227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.817 [2024-12-05 14:18:42.993233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.817 [2024-12-05 14:18:42.993238] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.817 [2024-12-05 14:18:43.004923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.817 [2024-12-05 14:18:43.005381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.817 [2024-12-05 14:18:43.005396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.817 [2024-12-05 14:18:43.005402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.817 [2024-12-05 14:18:43.005556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.817 [2024-12-05 14:18:43.005706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.817 [2024-12-05 14:18:43.005711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.817 [2024-12-05 14:18:43.005716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.817 [2024-12-05 14:18:43.005721] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:36.817 [2024-12-05 14:18:43.017526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.817 [2024-12-05 14:18:43.017937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.817 [2024-12-05 14:18:43.017949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.817 [2024-12-05 14:18:43.017954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.817 [2024-12-05 14:18:43.018104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.817 [2024-12-05 14:18:43.018253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.817 [2024-12-05 14:18:43.018259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.817 [2024-12-05 14:18:43.018264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.817 [2024-12-05 14:18:43.018269] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.817 [2024-12-05 14:18:43.030140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.817 [2024-12-05 14:18:43.030464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.817 [2024-12-05 14:18:43.030478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.817 [2024-12-05 14:18:43.030484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.817 [2024-12-05 14:18:43.030634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.817 [2024-12-05 14:18:43.030784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.817 [2024-12-05 14:18:43.030790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.817 [2024-12-05 14:18:43.030795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.817 [2024-12-05 14:18:43.030799] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:36.817 [2024-12-05 14:18:43.042749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.817 [2024-12-05 14:18:43.043197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.817 [2024-12-05 14:18:43.043210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.817 [2024-12-05 14:18:43.043215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.817 [2024-12-05 14:18:43.043365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.817 [2024-12-05 14:18:43.043519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.817 [2024-12-05 14:18:43.043527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.817 [2024-12-05 14:18:43.043532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.817 [2024-12-05 14:18:43.043536] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.817 [2024-12-05 14:18:43.055346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.817 [2024-12-05 14:18:43.055875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.817 [2024-12-05 14:18:43.055905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.817 [2024-12-05 14:18:43.055915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.817 [2024-12-05 14:18:43.056080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.817 [2024-12-05 14:18:43.056233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.817 [2024-12-05 14:18:43.056239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.817 [2024-12-05 14:18:43.056245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.817 [2024-12-05 14:18:43.056251] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:36.817 [2024-12-05 14:18:43.067935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.817 [2024-12-05 14:18:43.068488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.817 [2024-12-05 14:18:43.068518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.817 [2024-12-05 14:18:43.068531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.818 [2024-12-05 14:18:43.068696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.818 [2024-12-05 14:18:43.068850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.818 [2024-12-05 14:18:43.068857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.818 [2024-12-05 14:18:43.068863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.818 [2024-12-05 14:18:43.068869] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.818 [2024-12-05 14:18:43.080554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.818 [2024-12-05 14:18:43.081105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.818 [2024-12-05 14:18:43.081136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.818 [2024-12-05 14:18:43.081145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.818 [2024-12-05 14:18:43.081310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.818 [2024-12-05 14:18:43.081470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.818 [2024-12-05 14:18:43.081477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.818 [2024-12-05 14:18:43.081483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.818 [2024-12-05 14:18:43.081488] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:36.818 [2024-12-05 14:18:43.093170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.818 [2024-12-05 14:18:43.093828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.818 [2024-12-05 14:18:43.093858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x224e010 with addr=10.0.0.2, port=4420 00:28:36.818 [2024-12-05 14:18:43.093866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e010 is same with the state(6) to be set 00:28:36.818 [2024-12-05 14:18:43.094032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x224e010 (9): Bad file descriptor 00:28:36.818 [2024-12-05 14:18:43.094185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.818 [2024-12-05 14:18:43.094191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.818 [2024-12-05 14:18:43.094196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.818 [2024-12-05 14:18:43.094201] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
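Every attempt in the loop above dies the same way: connect() returns errno = 111 (ECONNREFUSED on Linux) because nothing is listening on 10.0.0.2:4420 until the subsystem listener is added further down in this log (14:18:43.229). A minimal sketch of how to observe the same errno from a shell while the port is closed (hypothetical probe commands, not part of this test run):

    # strace exposes the raw errno behind the refused connect;
    # 111 == ECONNREFUSED in <errno.h> on Linux.
    strace -e trace=connect nc -w 1 10.0.0.2 4420
    # expected while no listener is up:
    #   connect(3, {sa_family=AF_INET, sin_port=htons(4420),
    #               sin_addr=inet_addr("10.0.0.2")}, 16) = -1 ECONNREFUSED (Connection refused)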
00:28:37.123 [... two more identical reconnect failures at 14:18:43.105 and 14:18:43.118 ...]
00:28:37.124 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:37.124 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
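The two xtrace records above (autotest_common.sh@864 and @868) are the tail of a poll-until-ready guard: the arithmetic check evaluated false (attempts remained), so the helper returned success. The shape is roughly the following, a sketch assuming the usual countdown-loop idiom in the harness (the real helper lives in autotest_common.sh and may differ in detail):

    # Count down i attempts; succeed as soon as the app answers an RPC.
    # (( i == 0 )) only fires when every attempt was exhausted.
    for ((i = 50; i > 0; i--)); do
        rpc_cmd spdk_get_version &> /dev/null && break
        sleep 0.1
    done
    (( i == 0 )) && return 1
    return 0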
00:28:37.124 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:37.124 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:37.124 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:37.124 [... two more identical reconnect failures at 14:18:43.131 and 14:18:43.143 ...]
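timing_exit start_nvmf_tgt closes the timing span opened when the target app was launched. A minimal sketch of that enter/exit bookkeeping (assumed shape only; the real functions live in the harness's common scripts):

    declare -A _timing_t0
    timing_enter() { _timing_t0[$1]=$SECONDS; }                      # stamp phase start
    timing_exit()  { echo "$1: $(( SECONDS - _timing_t0[$1] ))s"; }  # report elapsed seconds

    timing_enter start_nvmf_tgt
    sleep 2                      # stand-in for the real target startup work
    timing_exit start_nvmf_tgt   # -> start_nvmf_tgt: 2s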
00:28:37.124 [... one more identical reconnect failure at 14:18:43.156 ...]
00:28:37.124 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:37.124 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:28:37.124 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:37.124 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:37.124 [2024-12-05 14:18:43.167442] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:37.124 [... one more identical reconnect failure at 14:18:43.169 ...]
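rpc_cmd in these traces is the harness's wrapper around SPDK's JSON-RPC client, so the transport-creation record above corresponds to an invocation along these lines (wrapper equivalence assumed; flags copied verbatim from the trace, and the reading of -o as the TCP C2H-success toggle in rpc.py's option table is an assumption):

    # -t tcp : transport type; -u 8192 : IO unit size in bytes;
    # -o     : TCP c2h-success toggle (assumed mapping).
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

The "TCP Transport Init" notice that follows is the target acknowledging this call.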
00:28:37.124 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:37.124 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:37.124 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:37.124 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:37.124 [... two more identical reconnect failures at 14:18:43.181 and 14:18:43.194 ...]
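With the transport up and a 64 MiB, 512-byte-block Malloc0 bdev created above, the records that follow complete the standard bdevperf target bring-up. Pulled out of the interleaved reconnect noise, the whole RPC sequence is (commands copied from the rpc_cmd traces in this log; the direct rpc.py form is assumed, as noted earlier):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                # transport (done above)
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                   # 64 MiB bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Only after the last call does the host's reconnect loop stop failing: the "Target Listening" notice below is immediately followed by a reset that succeeds.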
00:28:37.124 Malloc0
00:28:37.124 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:37.124 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:37.124 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:37.124 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:37.124 [... one more identical reconnect failure at 14:18:43.207 ...]
00:28:37.125 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:37.125 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:37.125 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:37.125 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:37.125 [... one more identical reconnect failure at 14:18:43.219 ...]
00:28:37.125 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:37.125 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:37.125 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:37.125 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:37.125 [2024-12-05 14:18:43.229592] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:37.125 [2024-12-05 14:18:43.232380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:37.125 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:37.125 14:18:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2908121
00:28:37.125 [2024-12-05 14:18:43.308722] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:28:38.639 4523.71 IOPS, 17.67 MiB/s
[2024-12-05T13:18:45.893Z] 5549.25 IOPS, 21.68 MiB/s
[2024-12-05T13:18:46.833Z] 6373.22 IOPS, 24.90 MiB/s
[2024-12-05T13:18:48.217Z] 7024.30 IOPS, 27.44 MiB/s
[2024-12-05T13:18:49.158Z] 7561.82 IOPS, 29.54 MiB/s
[2024-12-05T13:18:50.100Z] 7991.83 IOPS, 31.22 MiB/s
[2024-12-05T13:18:51.045Z] 8365.08 IOPS, 32.68 MiB/s
[2024-12-05T13:18:51.993Z] 8691.57 IOPS, 33.95 MiB/s
[2024-12-05T13:18:51.993Z] 8965.67 IOPS, 35.02 MiB/s
00:28:45.693 Latency(us)
00:28:45.693 Device Information                                        : runtime(s)  IOPS      MiB/s  Fail/s    TO/s  Average  min     max
00:28:45.693 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:45.693 Verification LBA range: start 0x0 length 0x4000
00:28:45.693 Nvme1n1                                                   : 15.01       8968.06   35.03  13874.74  0.00  5585.55  542.72  16711.68
00:28:45.693 [2024-12-05T13:18:51.993Z] ===================================================================================================================
00:28:45.693 [2024-12-05T13:18:51.993Z] Total                                                     :             8968.06   35.03  13874.74  0.00  5585.55  542.72  16711.68
00:28:45.693 14:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:28:45.693 14:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:45.693 14:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.693 14:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:45.693 14:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.693 14:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:28:45.693 14:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:28:45.693 14:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:45.693 14:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:28:45.693 14:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:45.693 14:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:28:45.693 14:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:45.693 14:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:45.693 rmmod nvme_tcp
00:28:45.693 rmmod nvme_fabrics
00:28:45.955 rmmod nvme_keyring
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2909362 ']'
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2909362
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2909362 ']'
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2909362
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2909362
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2909362'
00:28:45.955 killing process with pid 2909362
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2909362
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2909362
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:45.955 14:18:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:48.504
00:28:48.504 real 0m28.511s
00:28:48.504 user 1m4.176s
00:28:48.504 sys 0m7.765s
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
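The bdevperf summary above is internally consistent: at the 4096-byte IO size the table states, the reported IOPS and throughput columns agree. A quick check, with values taken directly from the Total row:

    # 8968.06 IOPS x 4096 B per IO, converted to MiB/s:
    echo 'scale=2; 8968.06 * 4096 / (1024 * 1024)' | bc
    # -> 35.03, matching the MiB/s column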
00:28:48.504 ************************************
00:28:48.504 END TEST nvmf_bdevperf
00:28:48.504 ************************************
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:48.504 ************************************
00:28:48.504 START TEST nvmf_target_disconnect
00:28:48.504 ************************************
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:28:48.504 * Looking for test storage...
00:28:48.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:28:48.504 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:48.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.505 --rc genhtml_branch_coverage=1 00:28:48.505 --rc genhtml_function_coverage=1 00:28:48.505 --rc genhtml_legend=1 00:28:48.505 --rc geninfo_all_blocks=1 00:28:48.505 --rc geninfo_unexecuted_blocks=1 00:28:48.505 00:28:48.505 ' 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:48.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.505 --rc genhtml_branch_coverage=1 00:28:48.505 --rc genhtml_function_coverage=1 00:28:48.505 --rc genhtml_legend=1 00:28:48.505 --rc geninfo_all_blocks=1 00:28:48.505 --rc geninfo_unexecuted_blocks=1 00:28:48.505 00:28:48.505 ' 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:48.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.505 --rc genhtml_branch_coverage=1 00:28:48.505 --rc genhtml_function_coverage=1 00:28:48.505 --rc genhtml_legend=1 00:28:48.505 --rc geninfo_all_blocks=1 00:28:48.505 --rc geninfo_unexecuted_blocks=1 00:28:48.505 00:28:48.505 ' 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:48.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.505 --rc genhtml_branch_coverage=1 00:28:48.505 --rc genhtml_function_coverage=1 00:28:48.505 --rc genhtml_legend=1 00:28:48.505 --rc geninfo_all_blocks=1 00:28:48.505 --rc geninfo_unexecuted_blocks=1 00:28:48.505 00:28:48.505 ' 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:48.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.505 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:48.506 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:48.506 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:48.506 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.506 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:48.506 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.506 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:48.506 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:48.506 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:48.506 14:18:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:56.652 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:56.652 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:56.652 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:56.652 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
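Everything from `gather_supported_nvmf_pci_devs` down to the two "Found net devices" lines is plain sysfs walking: each PCI function is matched against the known Intel/Mellanox device IDs (0x8086:0x159b is an E810 entry in the e810 array above), then its net/ directory is globbed for the kernel interface name. A minimal standalone sketch of that last mapping step (the helper name is hypothetical):

  # print the kernel net device(s) backing one PCI function, mirroring the
  # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob in the trace above
  list_net_devs() {
      local pci=$1 d
      for d in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$d" ] && echo "${d##*/}"   # e.g. cvl_0_0 for 0000:4b:00.0
      done
  }
  list_net_devs 0000:4b:00.0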
00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:56.652 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:56.653 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:56.653 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:56.653 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:56.653 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:56.653 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:56.653 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:56.653 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:56.653 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:56.653 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:56.653 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:56.653 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:56.653 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:56.653 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:56.653 14:19:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:56.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:56.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:28:56.653 00:28:56.653 --- 10.0.0.2 ping statistics --- 00:28:56.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.653 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:56.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:56.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:28:56.653 00:28:56.653 --- 10.0.0.1 ping statistics --- 00:28:56.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.653 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:56.653 ************************************ 00:28:56.653 START TEST nvmf_target_disconnect_tc1 00:28:56.653 ************************************ 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:56.653 14:19:02 
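The `nvmf_tcp_init` block above builds the topology the disconnect tests rely on: the two E810 ports (cabled back to back on this rig) are split so that one moves into a private network namespace to act as the target, and the bidirectional pings prove the 10.0.0.0/24 link before any NVMe traffic starts. Condensed from the trace, same commands with the xtrace noise trimmed:

  ip netns add cvl_0_0_ns_spdk                          # the target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # first port -> target namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # second port stays with the initiator
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> initiator ns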
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:56.653 [2024-12-05 14:19:02.354801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.653 [2024-12-05 14:19:02.354898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219dae0 with addr=10.0.0.2, port=4420 00:28:56.653 [2024-12-05 14:19:02.354939] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:56.653 [2024-12-05 14:19:02.354951] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:56.653 [2024-12-05 14:19:02.354961] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:28:56.653 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:56.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:56.653 Initializing NVMe Controllers 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:56.653 00:28:56.653 real 0m0.142s 00:28:56.653 user 0m0.067s 00:28:56.653 sys 0m0.076s 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:56.653 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:56.654 ************************************ 00:28:56.654 END TEST nvmf_target_disconnect_tc1 00:28:56.654 ************************************ 00:28:56.654 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:56.654 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:56.654 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
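nvmf_target_disconnect_tc1 passes precisely because the probe fails: nothing is listening on 10.0.0.2:4420 yet, `connect()` is refused (errno 111), the reconnect example exits 1, and the `NOT`/`valid_exec_arg` bookkeeping above inverts that into success. A reduced sketch of the inversion, under the assumption that only the exit-status flip matters (the real helper also classifies exit codes, e.g. the `(( es > 128 ))` signal check visible in the trace):

  # hypothetical reduction of the expected-failure wrapper
  NOT() {
      if "$@"; then
          return 1      # command unexpectedly succeeded -> test failure
      fi
      return 0          # command failed as expected -> test success
  }
  NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'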
xtrace_disable 00:28:56.654 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:56.654 ************************************ 00:28:56.654 START TEST nvmf_target_disconnect_tc2 00:28:56.654 ************************************ 00:28:56.654 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:28:56.654 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:56.654 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:56.654 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:56.654 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:56.654 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:56.654 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2915502 00:28:56.654 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2915502 00:28:56.654 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:56.654 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2915502 ']' 00:28:56.654 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.654 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:56.654 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.654 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:56.654 14:19:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:56.654 [2024-12-05 14:19:02.522947] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:28:56.654 [2024-12-05 14:19:02.523007] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:56.654 [2024-12-05 14:19:02.623040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:56.654 [2024-12-05 14:19:02.675624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:56.654 [2024-12-05 14:19:02.675676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
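`nvmfappstart -m 0xF0` launches `nvmf_tgt` inside the target namespace, pinning reactors to cores 4 through 7 (0xF0), which is why the reactor notices just below report cores 4, 5, 6 and 7; `waitforlisten 2915502` then blocks until the app's RPC socket answers. A hypothetical reduction of that start-and-wait step, not the helper's exact logic:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket (/var/tmp/spdk.sock) until the target is up;
  # the socket is reachable from the root namespace even though the app is not
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died early
      sleep 0.5
  done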
00:28:56.654 [2024-12-05 14:19:02.675685] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:56.654 [2024-12-05 14:19:02.675692] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:56.654 [2024-12-05 14:19:02.675698] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:56.654 [2024-12-05 14:19:02.677726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:56.654 [2024-12-05 14:19:02.677886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:56.654 [2024-12-05 14:19:02.678048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:56.654 [2024-12-05 14:19:02.678048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:57.228 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:57.228 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:57.228 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:57.228 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:57.228 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:57.228 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:57.228 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:57.228 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.228 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:57.228 Malloc0 00:28:57.228 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.228 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:57.228 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.229 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:57.229 [2024-12-05 14:19:03.431831] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:57.229 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.229 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:57.229 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.229 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:57.229 14:19:03 
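The `rpc_cmd` calls here and just below provision the target end to end; `rpc_cmd` forwards to scripts/rpc.py over /var/tmp/spdk.sock, so the same setup restated as direct calls looks roughly like this (a sketch, argument-for-argument from the trace):

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0     # 64 MiB RAM disk, 512 B blocks
  ./scripts/rpc.py nvmf_create_transport -t tcp -o          # TCP transport
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

This is the listener at 10.0.0.2:4420 that tc2 deliberately rips out from under the host with `kill -9` while the reconnect workload is running.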
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.229 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:57.229 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.229 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:57.229 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.229 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:57.229 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.229 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:57.229 [2024-12-05 14:19:03.472243] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:57.229 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.229 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:57.229 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.229 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:57.229 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.229 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2915639 00:28:57.229 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:57.229 14:19:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:59.798 14:19:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2915502 00:28:59.798 14:19:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:59.798 Read completed with error (sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.798 Read completed with error (sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.798 Read completed with error (sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.798 Read completed with error (sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.798 Read completed with error (sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.798 Read completed with error (sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.798 Read completed with error 
(sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.798 Read completed with error (sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.798 Read completed with error (sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.798 Read completed with error (sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.798 Write completed with error (sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.798 Write completed with error (sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.798 Read completed with error (sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.798 Write completed with error (sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.798 Write completed with error (sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.798 Write completed with error (sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.798 Read completed with error (sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.798 Write completed with error (sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.798 Read completed with error (sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.798 Read completed with error (sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.798 Write completed with error (sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.798 Write completed with error (sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.798 Read completed with error (sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.798 Read completed with error (sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.798 Write completed with error (sct=0, sc=8) 00:28:59.798 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Write completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 [2024-12-05 14:19:05.510763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed 
with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Write completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Write completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Write completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Write completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Write completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Write completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Read completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 Write completed with error (sct=0, sc=8) 00:28:59.799 starting I/O failed 00:28:59.799 [2024-12-05 14:19:05.511140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:59.799 [2024-12-05 14:19:05.511443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-12-05 14:19:05.511473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.799 qpair failed and we were unable to recover it. 00:28:59.799 [2024-12-05 14:19:05.511758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-12-05 14:19:05.511818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.799 qpair failed and we were unable to recover it. 00:28:59.799 [2024-12-05 14:19:05.512066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-12-05 14:19:05.512088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.799 qpair failed and we were unable to recover it. 00:28:59.799 [2024-12-05 14:19:05.512441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-12-05 14:19:05.512453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.799 qpair failed and we were unable to recover it. 00:28:59.799 [2024-12-05 14:19:05.512965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-12-05 14:19:05.513029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.799 qpair failed and we were unable to recover it. 
00:28:59.799 [2024-12-05 14:19:05.513330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-12-05 14:19:05.513345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.799 qpair failed and we were unable to recover it. 00:28:59.799 [2024-12-05 14:19:05.513546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-12-05 14:19:05.513560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.799 qpair failed and we were unable to recover it. 00:28:59.799 [2024-12-05 14:19:05.513860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-12-05 14:19:05.513873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.799 qpair failed and we were unable to recover it. 00:28:59.799 [2024-12-05 14:19:05.514226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-12-05 14:19:05.514238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.799 qpair failed and we were unable to recover it. 00:28:59.799 [2024-12-05 14:19:05.514548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-12-05 14:19:05.514560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.799 qpair failed and we were unable to recover it. 00:28:59.799 [2024-12-05 14:19:05.514679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-12-05 14:19:05.514691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.799 qpair failed and we were unable to recover it. 00:28:59.799 [2024-12-05 14:19:05.514974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-12-05 14:19:05.514986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.799 qpair failed and we were unable to recover it. 00:28:59.799 [2024-12-05 14:19:05.515298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-12-05 14:19:05.515310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.799 qpair failed and we were unable to recover it. 00:28:59.799 [2024-12-05 14:19:05.515626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-12-05 14:19:05.515639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.799 qpair failed and we were unable to recover it. 00:28:59.799 [2024-12-05 14:19:05.516022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-12-05 14:19:05.516034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.799 qpair failed and we were unable to recover it. 
00:28:59.799 [2024-12-05 14:19:05.516339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-12-05 14:19:05.516358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.799 qpair failed and we were unable to recover it. 00:28:59.799 [2024-12-05 14:19:05.516696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-12-05 14:19:05.516709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.799 qpair failed and we were unable to recover it. 00:28:59.799 [2024-12-05 14:19:05.517055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-12-05 14:19:05.517067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.799 qpair failed and we were unable to recover it. 00:28:59.799 [2024-12-05 14:19:05.517386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-12-05 14:19:05.517400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.799 qpair failed and we were unable to recover it. 00:28:59.799 [2024-12-05 14:19:05.517527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-12-05 14:19:05.517540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.799 qpair failed and we were unable to recover it. 00:28:59.799 [2024-12-05 14:19:05.517909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-12-05 14:19:05.517922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.799 qpair failed and we were unable to recover it. 00:28:59.799 [2024-12-05 14:19:05.518227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-12-05 14:19:05.518239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.799 qpair failed and we were unable to recover it. 00:28:59.799 [2024-12-05 14:19:05.518536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-12-05 14:19:05.518548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.799 qpair failed and we were unable to recover it. 00:28:59.799 [2024-12-05 14:19:05.518867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.799 [2024-12-05 14:19:05.518879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.799 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.519228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.519240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 
00:28:59.800 [2024-12-05 14:19:05.519583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.519596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.519800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.519812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.520169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.520181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.520399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.520410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.520727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.520739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.521052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.521064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.521409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.521420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.521697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.521709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.522018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.522031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.522384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.522395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 
00:28:59.800 [2024-12-05 14:19:05.522740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.522753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.523074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.523086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.523294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.523307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.523644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.523656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.523984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.523996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.524214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.524226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.524526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.524540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.524910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.524926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.525291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.525302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.525501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.525513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 
00:28:59.800 [2024-12-05 14:19:05.525822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.525834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.526202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.526214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.526391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.526404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.526727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.526740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.527053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.527065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.527246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.527259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.527613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.527625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.527966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.527978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.528295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.528306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 00:28:59.800 [2024-12-05 14:19:05.528646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.800 [2024-12-05 14:19:05.528659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.800 qpair failed and we were unable to recover it. 
00:28:59.805 [2024-12-05 14:19:05.594159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.805 [2024-12-05 14:19:05.594190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.805 qpair failed and we were unable to recover it. 00:28:59.805 [2024-12-05 14:19:05.594547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.805 [2024-12-05 14:19:05.594579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.805 qpair failed and we were unable to recover it. 00:28:59.805 [2024-12-05 14:19:05.594934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.805 [2024-12-05 14:19:05.594965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.805 qpair failed and we were unable to recover it. 00:28:59.805 [2024-12-05 14:19:05.595223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.805 [2024-12-05 14:19:05.595256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.805 qpair failed and we were unable to recover it. 00:28:59.805 [2024-12-05 14:19:05.595592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.805 [2024-12-05 14:19:05.595623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.805 qpair failed and we were unable to recover it. 00:28:59.805 [2024-12-05 14:19:05.595952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.595980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.596299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.596328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.596693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.596723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.597100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.597128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.597494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.597525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 
00:28:59.806 [2024-12-05 14:19:05.597872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.597901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.598207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.598237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.598599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.598631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.598988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.599017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.599376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.599412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.599781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.599813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.600180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.600210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.600496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.600528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.600905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.600934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.601307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.601337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 
00:28:59.806 [2024-12-05 14:19:05.601699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.601729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.602107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.602136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.602503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.602533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.602682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.602711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.603083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.603111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.603495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.603526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.603949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.603978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.604323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.604352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.604718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.604749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.605112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.605141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 
00:28:59.806 [2024-12-05 14:19:05.605516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.605546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.605896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.605926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.606191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.606220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.606490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.606521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.606903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.606932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.607258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.607287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.607615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.607647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.607981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.608011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.608368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.608398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.608903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.608942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 
00:28:59.806 [2024-12-05 14:19:05.609320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.609356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.609775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.609807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.610208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.610238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.806 [2024-12-05 14:19:05.610607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.806 [2024-12-05 14:19:05.610639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.806 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.610907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.610936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.611303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.611332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.611690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.611722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.612111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.612139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.612473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.612506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.612880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.612910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 
00:28:59.807 [2024-12-05 14:19:05.613173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.613201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.613534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.613565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.613937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.613966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.614332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.614360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.614626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.614661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.615033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.615063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.615428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.615468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.615747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.615777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.616154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.616182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.616538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.616567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 
00:28:59.807 [2024-12-05 14:19:05.616931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.616961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.617314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.617343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.617583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.617613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.617875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.617908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.618295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.618324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.618580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.618611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.618993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.619023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.619365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.619395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.619787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.619819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.620079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.620107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 
00:28:59.807 [2024-12-05 14:19:05.620453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.620497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.620839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.620868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.621140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.621169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.621430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.621469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.621840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.621870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.622243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.622271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.622630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.622662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.623106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.623135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.807 qpair failed and we were unable to recover it. 00:28:59.807 [2024-12-05 14:19:05.623511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.807 [2024-12-05 14:19:05.623541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.623920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.623949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 
00:28:59.808 [2024-12-05 14:19:05.624297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.624326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.624736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.624767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.625226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.625261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.625602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.625633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.626001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.626032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.626383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.626413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.626822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.626854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.627199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.627237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.627487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.627519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.627876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.627904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 
00:28:59.808 [2024-12-05 14:19:05.628285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.628315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.628560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.628589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.628964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.628994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.629365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.629394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.629839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.629869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.630314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.630342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.630702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.630734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.631097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.631127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.631475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.631507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.631832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.631861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 
00:28:59.808 [2024-12-05 14:19:05.632220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.632249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.632594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.632624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.633005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.633033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.633391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.633420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.633890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.633922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.634136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.634164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.634434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.634477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.634880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.634909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.635274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.635304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.635549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.635588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 
00:28:59.808 [2024-12-05 14:19:05.635941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.635971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.636342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.636370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.636776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.636808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.637166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.637196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.637536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.637566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.637910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.637940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.638312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.808 [2024-12-05 14:19:05.638341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.808 qpair failed and we were unable to recover it. 00:28:59.808 [2024-12-05 14:19:05.638690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.638721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.639088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.639119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.639476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.639506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 
00:28:59.809 [2024-12-05 14:19:05.639753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.639782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.640007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.640037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.640300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.640328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.640682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.640714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.641070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.641100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.641450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.641491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.641854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.641883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.642236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.642268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.642606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.642637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.643013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.643041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 
00:28:59.809 [2024-12-05 14:19:05.643404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.643433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.643783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.643812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.644181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.644212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.644541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.644571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.644920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.644951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.645271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.645299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.645597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.645626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.645981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.646010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.646338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.646369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.646733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.646762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 
00:28:59.809 [2024-12-05 14:19:05.647169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.647198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.647525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.647555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.647944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.647972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.648335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.648363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.648684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.648714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.649076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.649106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.649338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.649377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.649713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.649744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.650002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.650031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.650419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.650448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 
00:28:59.809 [2024-12-05 14:19:05.650806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.650836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.651191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.651220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.651602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.651633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.651902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.651930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.652296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.652326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.809 [2024-12-05 14:19:05.652682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.809 [2024-12-05 14:19:05.652714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.809 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.653089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.653118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.653489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.653521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.653882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.653912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.654236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.654264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 
00:28:59.810 [2024-12-05 14:19:05.654617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.654648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.655008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.655036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.655416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.655445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.655859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.655889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.656245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.656273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.656605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.656636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.656989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.657017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.657408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.657437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.657781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.657811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.658168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.658197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 
00:28:59.810 [2024-12-05 14:19:05.658567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.658598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.658980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.659008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.659383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.659412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.659794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.659824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.660170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.660198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.660542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.660573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.660895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.660925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.661306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.661341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.661688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.661718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.662000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.662028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 
00:28:59.810 [2024-12-05 14:19:05.662411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.662440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.662868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.662898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.663251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.663281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.663606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.663638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.663962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.663990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.664324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.664353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.664613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.664644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.664999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.665027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.665383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.665412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.665767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.665798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 
00:28:59.810 [2024-12-05 14:19:05.666162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.666190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.666594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.666629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.667006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.667037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.667382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.810 [2024-12-05 14:19:05.667411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.810 qpair failed and we were unable to recover it. 00:28:59.810 [2024-12-05 14:19:05.667610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.667640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.667950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.667978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.668349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.668379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.668773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.668803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.669172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.669201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.669532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.669561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 
00:28:59.811 [2024-12-05 14:19:05.669903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.669932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.670265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.670295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.670630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.670660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.671020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.671048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.671377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.671413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.671769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.671801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.672149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.672178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.672419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.672452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.672813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.672842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.673205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.673234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 
00:28:59.811 [2024-12-05 14:19:05.673568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.673600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.673963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.673991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.674351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.674379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.674761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.674791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.675140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.675169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.675535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.675566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.675817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.675847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.676088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.676116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.676564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.676595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.676799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.676831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 
00:28:59.811 [2024-12-05 14:19:05.677212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.677241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.677618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.677650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.678005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.678033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.678391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.678421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.678790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.678821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.679080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.679111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.679339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.679369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.679756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.679786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.680115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.680144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.680501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.680531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 
00:28:59.811 [2024-12-05 14:19:05.680900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.680928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.681272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.681308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.811 [2024-12-05 14:19:05.681637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.811 [2024-12-05 14:19:05.681669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.811 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.682017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.682047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.682387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.682416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.682828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.682858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.683084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.683115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.683367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.683397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.683678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.683709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.684065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.684095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 
00:28:59.812 [2024-12-05 14:19:05.684483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.684514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.684847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.684880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.685241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.685269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.685642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.685673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.686005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.686033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.686385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.686415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.686812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.686842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.687139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.687168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.687541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.687572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.687833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.687861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 
00:28:59.812 [2024-12-05 14:19:05.688268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.688298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.688649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.688681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.689009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.689037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.689397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.689427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.689797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.689828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.690199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.690228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.690476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.690511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.690875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.690903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.691231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.691263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.691629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.691661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 
00:28:59.812 [2024-12-05 14:19:05.691978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.692007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.692355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.692384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.692755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.692785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.693148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.693177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.693555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.693585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.693828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.693857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.694204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.694232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.812 qpair failed and we were unable to recover it. 00:28:59.812 [2024-12-05 14:19:05.694542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.812 [2024-12-05 14:19:05.694571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.694920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.694948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.695314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.695342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 
00:28:59.813 [2024-12-05 14:19:05.695674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.695704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.696072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.696102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.696453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.696503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.696908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.696937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.697327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.697356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.697716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.697747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.698122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.698152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.698515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.698545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.698906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.698934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.699287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.699316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 
00:28:59.813 [2024-12-05 14:19:05.699642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.699672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.699957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.699986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.700350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.700379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.700816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.700846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.701203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.701231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.701598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.701628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.701993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.702022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.702390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.702418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.702764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.702795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.703041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.703074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 
00:28:59.813 [2024-12-05 14:19:05.703441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.703486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.703707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.703739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.704113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.704142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.704514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.704545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.704934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.704962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.705290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.705320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.705726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.705756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.706125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.706153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.706486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.706517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.706799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.706834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 
00:28:59.813 [2024-12-05 14:19:05.707180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.707210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.707550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.707581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.707893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.707921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.708234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.708265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.708615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.708647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.708987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.813 [2024-12-05 14:19:05.709017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.813 qpair failed and we were unable to recover it. 00:28:59.813 [2024-12-05 14:19:05.709390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.814 [2024-12-05 14:19:05.709419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.814 qpair failed and we were unable to recover it. 00:28:59.814 [2024-12-05 14:19:05.709696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.814 [2024-12-05 14:19:05.709727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.814 qpair failed and we were unable to recover it. 00:28:59.814 [2024-12-05 14:19:05.710105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.814 [2024-12-05 14:19:05.710133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.814 qpair failed and we were unable to recover it. 00:28:59.814 [2024-12-05 14:19:05.710509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.814 [2024-12-05 14:19:05.710539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.814 qpair failed and we were unable to recover it. 
00:28:59.814 [2024-12-05 14:19:05.710893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.814 [2024-12-05 14:19:05.710922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.814 qpair failed and we were unable to recover it. 00:28:59.814 [2024-12-05 14:19:05.711274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.814 [2024-12-05 14:19:05.711303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.814 qpair failed and we were unable to recover it. 00:28:59.814 [2024-12-05 14:19:05.711650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.814 [2024-12-05 14:19:05.711682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.814 qpair failed and we were unable to recover it. 00:28:59.814 [2024-12-05 14:19:05.712057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.814 [2024-12-05 14:19:05.712087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.814 qpair failed and we were unable to recover it. 00:28:59.814 [2024-12-05 14:19:05.712475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.814 [2024-12-05 14:19:05.712506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.814 qpair failed and we were unable to recover it. 00:28:59.814 [2024-12-05 14:19:05.712848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.814 [2024-12-05 14:19:05.712878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.814 qpair failed and we were unable to recover it. 00:28:59.814 [2024-12-05 14:19:05.713136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.814 [2024-12-05 14:19:05.713164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.814 qpair failed and we were unable to recover it. 00:28:59.814 [2024-12-05 14:19:05.713507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.814 [2024-12-05 14:19:05.713537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.814 qpair failed and we were unable to recover it. 00:28:59.814 [2024-12-05 14:19:05.713914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.814 [2024-12-05 14:19:05.713945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.814 qpair failed and we were unable to recover it. 00:28:59.814 [2024-12-05 14:19:05.714341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.814 [2024-12-05 14:19:05.714370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.814 qpair failed and we were unable to recover it. 
00:28:59.814 [2024-12-05 14:19:05.714709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.814 [2024-12-05 14:19:05.714739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.814 qpair failed and we were unable to recover it. 00:28:59.814 [2024-12-05 14:19:05.715082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.814 [2024-12-05 14:19:05.715111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.814 qpair failed and we were unable to recover it. 00:28:59.814 [2024-12-05 14:19:05.715484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.814 [2024-12-05 14:19:05.715515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.814 qpair failed and we were unable to recover it. 00:28:59.814 [2024-12-05 14:19:05.715873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.814 [2024-12-05 14:19:05.715902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.814 qpair failed and we were unable to recover it. 00:28:59.814 [2024-12-05 14:19:05.716266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.814 [2024-12-05 14:19:05.716296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.814 qpair failed and we were unable to recover it. 00:28:59.814 [2024-12-05 14:19:05.716613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.814 [2024-12-05 14:19:05.716646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.814 qpair failed and we were unable to recover it. 00:28:59.814 [2024-12-05 14:19:05.716988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.814 [2024-12-05 14:19:05.717022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.814 qpair failed and we were unable to recover it. 00:28:59.814 [2024-12-05 14:19:05.717387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.814 [2024-12-05 14:19:05.717416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.814 qpair failed and we were unable to recover it. 00:28:59.814 [2024-12-05 14:19:05.717764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.814 [2024-12-05 14:19:05.717795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.814 qpair failed and we were unable to recover it. 00:28:59.814 [2024-12-05 14:19:05.718013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.814 [2024-12-05 14:19:05.718042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420 00:28:59.814 qpair failed and we were unable to recover it. 
00:28:59.814 [2024-12-05 14:19:05.718374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.814 [2024-12-05 14:19:05.718405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.814 qpair failed and we were unable to recover it.
00:28:59.814 [2024-12-05 14:19:05.718797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.814 [2024-12-05 14:19:05.718829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.814 qpair failed and we were unable to recover it.
00:28:59.814 [2024-12-05 14:19:05.719192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.814 [2024-12-05 14:19:05.719221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.814 qpair failed and we were unable to recover it.
00:28:59.814 [2024-12-05 14:19:05.719476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.814 [2024-12-05 14:19:05.719510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.814 qpair failed and we were unable to recover it.
00:28:59.814 [2024-12-05 14:19:05.719886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.814 [2024-12-05 14:19:05.719915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.814 qpair failed and we were unable to recover it.
00:28:59.814 [2024-12-05 14:19:05.720177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.814 [2024-12-05 14:19:05.720206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.814 qpair failed and we were unable to recover it.
00:28:59.814 [2024-12-05 14:19:05.720573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.814 [2024-12-05 14:19:05.720604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.814 qpair failed and we were unable to recover it.
00:28:59.814 [2024-12-05 14:19:05.720839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.814 [2024-12-05 14:19:05.720871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.814 qpair failed and we were unable to recover it.
00:28:59.814 [2024-12-05 14:19:05.721229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.814 [2024-12-05 14:19:05.721258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.814 qpair failed and we were unable to recover it.
00:28:59.814 [2024-12-05 14:19:05.721612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.814 [2024-12-05 14:19:05.721643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.814 qpair failed and we were unable to recover it.
00:28:59.814 [2024-12-05 14:19:05.721996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.814 [2024-12-05 14:19:05.722025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.814 qpair failed and we were unable to recover it.
00:28:59.814 [2024-12-05 14:19:05.722376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.814 [2024-12-05 14:19:05.722405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.814 qpair failed and we were unable to recover it.
00:28:59.814 [2024-12-05 14:19:05.722770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.814 [2024-12-05 14:19:05.722799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.814 qpair failed and we were unable to recover it.
00:28:59.814 [2024-12-05 14:19:05.723163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.814 [2024-12-05 14:19:05.723192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.814 qpair failed and we were unable to recover it.
00:28:59.814 [2024-12-05 14:19:05.723552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.814 [2024-12-05 14:19:05.723582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.723966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.723994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.724343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.724372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.724757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.724788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.725167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.725196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.725441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.725485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.725757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.725786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.726152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.726181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.726539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.726569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.726918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.726947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.727180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.727212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.727622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.727653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.728033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.728062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.728427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.728492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.728848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.728876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.729236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.729265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.729634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.729664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.730006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.730036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.730380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.730409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.730782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.730812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.731157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.731185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.731530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.731560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.731937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.731967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.732337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.732369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.732714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.732745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.733115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.733144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.733494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.733524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.733854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.733883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.734210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.734242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.734509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.734539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.734908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.734937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.735328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.735358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.735690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.735720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.736081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.736111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.736482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.736513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.736869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.736897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.737280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.737309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.737561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.737593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.737765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.815 [2024-12-05 14:19:05.737793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.815 qpair failed and we were unable to recover it.
00:28:59.815 [2024-12-05 14:19:05.738169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.738198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.738552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.738582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.738941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.738971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.739332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.739362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.739724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.739754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.740112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.740140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.740385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.740416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.740816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.740847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.741219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.741248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.741597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.741628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.742001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.742030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.742387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.742429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.742772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.742803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.743172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.743203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.743574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.743604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.743978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.744007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.744358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.744386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.744741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.744771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.745106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.745136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.745435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.745479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.745845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.745875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.746239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.746268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.746542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.746572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.746901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.746930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.747308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.747336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.747683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.747716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.748060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.748089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.748474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.748504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.748737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.748769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.749003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.749038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.749373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.749401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.749809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.749840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.750182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.750213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.750556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.750587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.750914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.816 [2024-12-05 14:19:05.750944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.816 qpair failed and we were unable to recover it.
00:28:59.816 [2024-12-05 14:19:05.751323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.751353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.751640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.751670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.752048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.752078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.752416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.752452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.752876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.752906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.753235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.753264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.753630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.753660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.754023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.754052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.754410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.754440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.754795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.754825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.755191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.755219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.755588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.755618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.755967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.755996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.756365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.756394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.756659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.756689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.757031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.757061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.757299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.757331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.757684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.757715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.758087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.758116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.758491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.758523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.758895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.758923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.759269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.759300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.759647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.759679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.759998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.760027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.760387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.760415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.760794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.760824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.761196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.761226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.761569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.761599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.761960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.761990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.762321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.762349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.762791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.762828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.763215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.763245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.763702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.763735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.764085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.764115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.764493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.764525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.764868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.764899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.765235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.765264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.765626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.817 [2024-12-05 14:19:05.765657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.817 qpair failed and we were unable to recover it.
00:28:59.817 [2024-12-05 14:19:05.766033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.766063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.766422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.766452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.766789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.766818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.767183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.767213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.767559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.767590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.767925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.767954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.768542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.768666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.769114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.769153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.769414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.769445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.769845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.769876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.770224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.770257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.770609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.770640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.771004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.771033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.771383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.771414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.771578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.771615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.771780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.771808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.772175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.772205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.772570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.772605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.773000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.773030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.773402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.773445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.773842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.773874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.774107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.774140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.774514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.774546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.774904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.774933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.775302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.775331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.775587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.775617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.775997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.776027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.776322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.776352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.776688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.776719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.777050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.777079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.777489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.777521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.777887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.777918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.778276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.778304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.778675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.778706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.779060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.779091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.779452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.779492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.779839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.779868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.780231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.780262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.818 qpair failed and we were unable to recover it.
00:28:59.818 [2024-12-05 14:19:05.780630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.818 [2024-12-05 14:19:05.780663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.819 qpair failed and we were unable to recover it.
00:28:59.819 [2024-12-05 14:19:05.781022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.819 [2024-12-05 14:19:05.781052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.819 qpair failed and we were unable to recover it.
00:28:59.819 [2024-12-05 14:19:05.781422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.819 [2024-12-05 14:19:05.781452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.819 qpair failed and we were unable to recover it.
00:28:59.819 [2024-12-05 14:19:05.781871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.819 [2024-12-05 14:19:05.781901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.819 qpair failed and we were unable to recover it.
00:28:59.819 [2024-12-05 14:19:05.782266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.819 [2024-12-05 14:19:05.782296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.819 qpair failed and we were unable to recover it.
00:28:59.819 [2024-12-05 14:19:05.782640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.819 [2024-12-05 14:19:05.782671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.819 qpair failed and we were unable to recover it.
00:28:59.819 [2024-12-05 14:19:05.783042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.783072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.783427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.783469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.783847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.783878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.784237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.784266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.784629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.784662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.784922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.784953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.785302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.785331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.785677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.785710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.786056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.786087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.786470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.786502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 
00:28:59.819 [2024-12-05 14:19:05.786941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.786971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.787348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.787379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.787743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.787776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.788137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.788169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.788409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.788442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.788881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.788920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.789257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.789286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.789635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.789667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.789974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.790005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.790356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.790388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 
00:28:59.819 [2024-12-05 14:19:05.790748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.790780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.791179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.791210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.791572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.791604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.791967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.791998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.792358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.792388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.792759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.792790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.793166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.793197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.793561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.793592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.793954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.793984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.794338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.794368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 
00:28:59.819 [2024-12-05 14:19:05.794602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.794636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.795065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.795095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.819 qpair failed and we were unable to recover it. 00:28:59.819 [2024-12-05 14:19:05.795432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.819 [2024-12-05 14:19:05.795476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.797490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.797559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.797998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.798037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.798390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.798422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.798800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.798832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.799197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.799226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.799496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.799527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.799899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.799930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 
00:28:59.820 [2024-12-05 14:19:05.800290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.800321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.800713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.800744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.801087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.801119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.801500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.801533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.801789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.801821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.802092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.802120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.802479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.802513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.802871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.802902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.803249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.803279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.803658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.803691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 
00:28:59.820 [2024-12-05 14:19:05.804048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.804078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.804473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.804508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.804879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.804910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.805277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.805308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.805676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.805706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.806067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.806104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.806407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.806437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.806810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.806841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.807180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.807210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.807577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.807613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 
00:28:59.820 [2024-12-05 14:19:05.807989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.808020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.808393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.808423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.808841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.808872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.809213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.809243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.809593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.809624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.809980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.810011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.810373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.810403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.810775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.810809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.811168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.811197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.820 [2024-12-05 14:19:05.811558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.811591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 
00:28:59.820 [2024-12-05 14:19:05.811960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.820 [2024-12-05 14:19:05.811990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.820 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.812355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.812387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.812743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.812774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.813138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.813168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.813542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.813574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.813850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.813886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.814128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.814163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.814517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.814550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.814910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.814943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.815302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.815333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 
00:28:59.821 [2024-12-05 14:19:05.815692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.815725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.816085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.816118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.816480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.816511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.816917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.816946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.817310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.817343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.817580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.817616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.817853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.817885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.818274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.818306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.818677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.818708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.819067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.819099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 
00:28:59.821 [2024-12-05 14:19:05.819470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.819504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.819865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.819898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.820249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.820281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.820627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.820658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.821076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.821107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.821441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.821506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.821858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.821887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.822243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.822275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.822595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.822629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.822973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.823002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 
00:28:59.821 [2024-12-05 14:19:05.823395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.823428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.823810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.823846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.824214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.824243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.824616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.824648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.825049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.821 [2024-12-05 14:19:05.825081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.821 qpair failed and we were unable to recover it. 00:28:59.821 [2024-12-05 14:19:05.825439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.825482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.825833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.825863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.826227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.826256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.826695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.826728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.827110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.827141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 
00:28:59.822 [2024-12-05 14:19:05.827492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.827523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.827870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.827900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.828251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.828280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.828621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.828655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.829010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.829042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.829402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.829432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.829831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.829863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.830217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.830247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.830610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.830645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.831003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.831034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 
00:28:59.822 [2024-12-05 14:19:05.831397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.831427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.831794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.831826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.832192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.832224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.832577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.832610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.832965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.832995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.833347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.833380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.833752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.833783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.834149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.834178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.834520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.834550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.834905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.834934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 
00:28:59.822 [2024-12-05 14:19:05.835308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.835337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.835706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.835735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.836110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.836139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.836503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.836532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.836908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.836937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.837293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.837328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.837719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.837750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.838119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.838148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.838511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.838541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.838906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.838936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 
00:28:59.822 [2024-12-05 14:19:05.839303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.839332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.839705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.839735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.840096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.822 [2024-12-05 14:19:05.840124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.822 qpair failed and we were unable to recover it. 00:28:59.822 [2024-12-05 14:19:05.840480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.823 [2024-12-05 14:19:05.840511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.823 qpair failed and we were unable to recover it. 00:28:59.823 [2024-12-05 14:19:05.840886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.823 [2024-12-05 14:19:05.840915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.823 qpair failed and we were unable to recover it. 00:28:59.823 [2024-12-05 14:19:05.841274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.823 [2024-12-05 14:19:05.841303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.823 qpair failed and we were unable to recover it. 00:28:59.823 [2024-12-05 14:19:05.841683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.823 [2024-12-05 14:19:05.841714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.823 qpair failed and we were unable to recover it. 00:28:59.823 [2024-12-05 14:19:05.842076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.823 [2024-12-05 14:19:05.842104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.823 qpair failed and we were unable to recover it. 00:28:59.823 [2024-12-05 14:19:05.842476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.823 [2024-12-05 14:19:05.842507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.823 qpair failed and we were unable to recover it. 00:28:59.823 [2024-12-05 14:19:05.842857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.823 [2024-12-05 14:19:05.842886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.823 qpair failed and we were unable to recover it. 
00:28:59.823 [2024-12-05 14:19:05.843260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.823 [2024-12-05 14:19:05.843290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.823 qpair failed and we were unable to recover it.
00:28:59.823 [... the same three-line failure pattern -- posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." -- repeats roughly 200 more times with only the timestamps advancing (14:19:05.843618 through 14:19:05.922067); duplicates elided ...]
00:28:59.828 [2024-12-05 14:19:05.922414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.828 [2024-12-05 14:19:05.922443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.828 qpair failed and we were unable to recover it.
00:28:59.828 [2024-12-05 14:19:05.922851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.828 [2024-12-05 14:19:05.922881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.828 qpair failed and we were unable to recover it. 00:28:59.828 [2024-12-05 14:19:05.923233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.828 [2024-12-05 14:19:05.923263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.828 qpair failed and we were unable to recover it. 00:28:59.828 [2024-12-05 14:19:05.923682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.828 [2024-12-05 14:19:05.923719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.828 qpair failed and we were unable to recover it. 00:28:59.828 [2024-12-05 14:19:05.924067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.828 [2024-12-05 14:19:05.924096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.828 qpair failed and we were unable to recover it. 00:28:59.828 [2024-12-05 14:19:05.924481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.828 [2024-12-05 14:19:05.924511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.828 qpair failed and we were unable to recover it. 00:28:59.828 [2024-12-05 14:19:05.924858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.828 [2024-12-05 14:19:05.924887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.828 qpair failed and we were unable to recover it. 00:28:59.828 [2024-12-05 14:19:05.925261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.828 [2024-12-05 14:19:05.925290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.828 qpair failed and we were unable to recover it. 00:28:59.828 [2024-12-05 14:19:05.925662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.828 [2024-12-05 14:19:05.925693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.828 qpair failed and we were unable to recover it. 00:28:59.828 [2024-12-05 14:19:05.926131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.926160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.926517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.926548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 
00:28:59.829 [2024-12-05 14:19:05.926913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.926942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.927303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.927333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.927587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.927618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.927965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.927995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.928362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.928390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.928783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.928813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.929155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.929187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.929572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.929603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.929950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.929980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.930359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.930388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 
00:28:59.829 [2024-12-05 14:19:05.930742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.930773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.931127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.931156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.931520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.931552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.931918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.931947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.932309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.932338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.932707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.932737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.933116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.933145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.933510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.933540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.933899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.933927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.934296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.934326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 
00:28:59.829 [2024-12-05 14:19:05.934684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.934714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.935092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.935121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.935484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.935515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.935949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.935978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.936209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.936242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.936591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.936622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.936964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.936995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.937367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.937396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.937768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.937799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.938158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.938187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 
00:28:59.829 [2024-12-05 14:19:05.938531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.938562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.938919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.938948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.939311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.939347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.939700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.939731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.940159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.940188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.940547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.829 [2024-12-05 14:19:05.940577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.829 qpair failed and we were unable to recover it. 00:28:59.829 [2024-12-05 14:19:05.940819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.940848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.941219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.941249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.941489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.941519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.941891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.941921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 
00:28:59.830 [2024-12-05 14:19:05.942162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.942194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.942524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.942555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.942758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.942790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.943200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.943230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.943588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.943619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.943989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.944019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.944381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.944412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.944761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.944791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.945053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.945082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.945428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.945474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 
00:28:59.830 [2024-12-05 14:19:05.945838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.945868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.946236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.946265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.946648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.946679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.947038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.947067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.947411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.947440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.947870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.947900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.948138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.948171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.948525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.948556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.948913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.948942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.949308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.949337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 
00:28:59.830 [2024-12-05 14:19:05.949711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.949742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.950099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.950128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.950374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.950405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.950617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.950648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.951007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.951036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.951390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.951419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.951788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.951819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.952177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.952207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.952471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.952504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.830 [2024-12-05 14:19:05.952909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.952938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 
00:28:59.830 [2024-12-05 14:19:05.953291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.830 [2024-12-05 14:19:05.953322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.830 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.953687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.953719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.954077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.954113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.954471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.954502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.954871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.954900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.955240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.955269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.955625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.955655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.955991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.956020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.956389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.956419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.956694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.956727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 
00:28:59.831 [2024-12-05 14:19:05.957116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.957145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.957512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.957544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.957911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.957940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.958199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.958228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.958481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.958514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.958872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.958903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.959261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.959291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.959546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.959576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.959944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.959974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.960330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.960360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 
00:28:59.831 [2024-12-05 14:19:05.960711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.960740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.961103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.961133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.961502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.961533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.961894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.961924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.962290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.962320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.962604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.962635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.962973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.963002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.963364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.963393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.963752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.963782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.964143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.964172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 
00:28:59.831 [2024-12-05 14:19:05.964528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.964560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.964976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.965005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.965349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.965378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.965737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.965768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.966128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.966157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.966520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.966550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.966915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.966945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.967311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.967340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.831 [2024-12-05 14:19:05.967706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.831 [2024-12-05 14:19:05.967736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.831 qpair failed and we were unable to recover it. 00:28:59.832 [2024-12-05 14:19:05.968079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.832 [2024-12-05 14:19:05.968108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.832 qpair failed and we were unable to recover it. 
00:28:59.832 [2024-12-05 14:19:05.968447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.832 [2024-12-05 14:19:05.968485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.832 qpair failed and we were unable to recover it. 00:28:59.832 [2024-12-05 14:19:05.968851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.832 [2024-12-05 14:19:05.968879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.832 qpair failed and we were unable to recover it. 00:28:59.832 [2024-12-05 14:19:05.969289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.832 [2024-12-05 14:19:05.969324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.832 qpair failed and we were unable to recover it. 00:28:59.832 [2024-12-05 14:19:05.969688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.832 [2024-12-05 14:19:05.969719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.832 qpair failed and we were unable to recover it. 00:28:59.832 [2024-12-05 14:19:05.969945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.832 [2024-12-05 14:19:05.969975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.832 qpair failed and we were unable to recover it. 00:28:59.832 [2024-12-05 14:19:05.970317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.832 [2024-12-05 14:19:05.970347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.832 qpair failed and we were unable to recover it. 00:28:59.832 [2024-12-05 14:19:05.970792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.832 [2024-12-05 14:19:05.970822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.832 qpair failed and we were unable to recover it. 00:28:59.832 [2024-12-05 14:19:05.971190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.832 [2024-12-05 14:19:05.971220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.832 qpair failed and we were unable to recover it. 00:28:59.832 [2024-12-05 14:19:05.971597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.832 [2024-12-05 14:19:05.971628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.832 qpair failed and we were unable to recover it. 00:28:59.832 [2024-12-05 14:19:05.971983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.832 [2024-12-05 14:19:05.972013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.832 qpair failed and we were unable to recover it. 
00:28:59.832 [2024-12-05 14:19:05.972377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.832 [2024-12-05 14:19:05.972406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.832 qpair failed and we were unable to recover it.
00:28:59.832 [... the same three-line error block repeats ~210 times between 14:19:05.972 and 14:19:06.051: connect() fails with errno = 111 (ECONNREFUSED) and the qpair to addr=10.0.0.2, port=4420 (tqpair=0x7f2aa0000b90) cannot be recovered ...]
00:28:59.837 [2024-12-05 14:19:06.051611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:59.837 [2024-12-05 14:19:06.051642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:28:59.837 qpair failed and we were unable to recover it.
00:28:59.837 [2024-12-05 14:19:06.052003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.837 [2024-12-05 14:19:06.052034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.837 qpair failed and we were unable to recover it. 00:28:59.837 [2024-12-05 14:19:06.052389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.837 [2024-12-05 14:19:06.052418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.837 qpair failed and we were unable to recover it. 00:28:59.837 [2024-12-05 14:19:06.052802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.837 [2024-12-05 14:19:06.052834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.837 qpair failed and we were unable to recover it. 00:28:59.837 [2024-12-05 14:19:06.053197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.837 [2024-12-05 14:19:06.053227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.837 qpair failed and we were unable to recover it. 00:28:59.837 [2024-12-05 14:19:06.053481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.053515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.053881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.053910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.054130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.054161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.054401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.054433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.054826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.054860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.055230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.055267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 
00:28:59.838 [2024-12-05 14:19:06.055629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.055663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.056021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.056054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.056296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.056328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.056696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.056728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.057102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.057134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.057365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.057397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.057772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.057806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.058162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.058193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.058552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.058586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.058927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.058956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 
00:28:59.838 [2024-12-05 14:19:06.059309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.059341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.059684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.059716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.060109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.060139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.060496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.060527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.060889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.060920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.061292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.061322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.061693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.061725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.062062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.062091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.062442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.062487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.062823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.062853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 
00:28:59.838 [2024-12-05 14:19:06.063209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.063242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.063616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.063651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.064000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.064030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.064390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.064421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.064795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.064830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.065128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.065157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.065400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.065434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.065780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.065810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.066180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.066209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.066578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.066608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 
00:28:59.838 [2024-12-05 14:19:06.066849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.066882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.067267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.838 [2024-12-05 14:19:06.067297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.838 qpair failed and we were unable to recover it. 00:28:59.838 [2024-12-05 14:19:06.067640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.839 [2024-12-05 14:19:06.067671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.839 qpair failed and we were unable to recover it. 00:28:59.839 [2024-12-05 14:19:06.068040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.839 [2024-12-05 14:19:06.068070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.839 qpair failed and we were unable to recover it. 00:28:59.839 [2024-12-05 14:19:06.068431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.839 [2024-12-05 14:19:06.068472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.839 qpair failed and we were unable to recover it. 00:28:59.839 [2024-12-05 14:19:06.068818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.839 [2024-12-05 14:19:06.068848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.839 qpair failed and we were unable to recover it. 00:28:59.839 [2024-12-05 14:19:06.069224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.839 [2024-12-05 14:19:06.069253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.839 qpair failed and we were unable to recover it. 00:28:59.839 [2024-12-05 14:19:06.069597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.839 [2024-12-05 14:19:06.069629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.839 qpair failed and we were unable to recover it. 00:28:59.839 [2024-12-05 14:19:06.070001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.839 [2024-12-05 14:19:06.070030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.839 qpair failed and we were unable to recover it. 00:28:59.839 [2024-12-05 14:19:06.070256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.839 [2024-12-05 14:19:06.070293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.839 qpair failed and we were unable to recover it. 
00:28:59.839 [2024-12-05 14:19:06.070689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.839 [2024-12-05 14:19:06.070720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.839 qpair failed and we were unable to recover it. 00:28:59.839 [2024-12-05 14:19:06.071084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.839 [2024-12-05 14:19:06.071114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.839 qpair failed and we were unable to recover it. 00:28:59.839 [2024-12-05 14:19:06.071477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.839 [2024-12-05 14:19:06.071508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.839 qpair failed and we were unable to recover it. 00:28:59.839 [2024-12-05 14:19:06.071798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.839 [2024-12-05 14:19:06.071828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.839 qpair failed and we were unable to recover it. 00:28:59.839 [2024-12-05 14:19:06.072185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.839 [2024-12-05 14:19:06.072215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.839 qpair failed and we were unable to recover it. 00:28:59.839 [2024-12-05 14:19:06.072574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:59.839 [2024-12-05 14:19:06.072604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:28:59.839 qpair failed and we were unable to recover it. 00:29:00.112 [2024-12-05 14:19:06.072959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.112 [2024-12-05 14:19:06.072991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.112 qpair failed and we were unable to recover it. 00:29:00.112 [2024-12-05 14:19:06.073345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.112 [2024-12-05 14:19:06.073374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.112 qpair failed and we were unable to recover it. 00:29:00.112 [2024-12-05 14:19:06.073758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.112 [2024-12-05 14:19:06.073791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.112 qpair failed and we were unable to recover it. 00:29:00.112 [2024-12-05 14:19:06.074155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.112 [2024-12-05 14:19:06.074186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.112 qpair failed and we were unable to recover it. 
00:29:00.112 [2024-12-05 14:19:06.074548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.112 [2024-12-05 14:19:06.074579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.112 qpair failed and we were unable to recover it. 00:29:00.112 [2024-12-05 14:19:06.074934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.112 [2024-12-05 14:19:06.074965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.112 qpair failed and we were unable to recover it. 00:29:00.112 [2024-12-05 14:19:06.075330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.112 [2024-12-05 14:19:06.075358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.112 qpair failed and we were unable to recover it. 00:29:00.112 [2024-12-05 14:19:06.075710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.112 [2024-12-05 14:19:06.075741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.112 qpair failed and we were unable to recover it. 00:29:00.112 [2024-12-05 14:19:06.076106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.112 [2024-12-05 14:19:06.076135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.112 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.076500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.076531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.076803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.076832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.077207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.077236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.077576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.077607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.077943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.077972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 
00:29:00.113 [2024-12-05 14:19:06.078324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.078354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.078720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.078750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.079114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.079144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.079516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.079549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.079907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.079936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.080296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.080326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.080702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.080734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.081083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.081113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.081480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.081510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.081888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.081918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 
00:29:00.113 [2024-12-05 14:19:06.082269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.082299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.082670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.082701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.083092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.083121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.083479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.083509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.083771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.083800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.084175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.084206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.084594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.084626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.084968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.084998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.085340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.085369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.085556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.085593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 
00:29:00.113 [2024-12-05 14:19:06.085975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.086004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.086245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.086274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.086615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.086645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.087044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.087074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.087428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.087471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.087751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.087781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.088236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.088265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.088624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.088655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.089015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.089044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.089404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.089434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 
00:29:00.113 [2024-12-05 14:19:06.089790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.089820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.090072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.113 [2024-12-05 14:19:06.090101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.113 qpair failed and we were unable to recover it. 00:29:00.113 [2024-12-05 14:19:06.090450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.090504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.114 qpair failed and we were unable to recover it. 00:29:00.114 [2024-12-05 14:19:06.090872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.090902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.114 qpair failed and we were unable to recover it. 00:29:00.114 [2024-12-05 14:19:06.091269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.091298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.114 qpair failed and we were unable to recover it. 00:29:00.114 [2024-12-05 14:19:06.091679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.091709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.114 qpair failed and we were unable to recover it. 00:29:00.114 [2024-12-05 14:19:06.092062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.092091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.114 qpair failed and we were unable to recover it. 00:29:00.114 [2024-12-05 14:19:06.092497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.092528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.114 qpair failed and we were unable to recover it. 00:29:00.114 [2024-12-05 14:19:06.092871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.092901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.114 qpair failed and we were unable to recover it. 00:29:00.114 [2024-12-05 14:19:06.093267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.093296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.114 qpair failed and we were unable to recover it. 
00:29:00.114 [2024-12-05 14:19:06.093633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.093664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.114 qpair failed and we were unable to recover it. 00:29:00.114 [2024-12-05 14:19:06.094023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.094053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.114 qpair failed and we were unable to recover it. 00:29:00.114 [2024-12-05 14:19:06.094413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.094443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.114 qpair failed and we were unable to recover it. 00:29:00.114 [2024-12-05 14:19:06.094813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.094843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.114 qpair failed and we were unable to recover it. 00:29:00.114 [2024-12-05 14:19:06.095189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.095219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.114 qpair failed and we were unable to recover it. 00:29:00.114 [2024-12-05 14:19:06.095582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.095613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.114 qpair failed and we were unable to recover it. 00:29:00.114 [2024-12-05 14:19:06.095867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.095897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.114 qpair failed and we were unable to recover it. 00:29:00.114 [2024-12-05 14:19:06.096140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.096169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.114 qpair failed and we were unable to recover it. 00:29:00.114 [2024-12-05 14:19:06.096527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.096558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.114 qpair failed and we were unable to recover it. 00:29:00.114 [2024-12-05 14:19:06.096927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.096956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.114 qpair failed and we were unable to recover it. 
00:29:00.114 [2024-12-05 14:19:06.097399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.097428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.114 qpair failed and we were unable to recover it. 00:29:00.114 [2024-12-05 14:19:06.097777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.097807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.114 qpair failed and we were unable to recover it. 00:29:00.114 [2024-12-05 14:19:06.098170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.098201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.114 qpair failed and we were unable to recover it. 00:29:00.114 [2024-12-05 14:19:06.098549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.098580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.114 qpair failed and we were unable to recover it. 00:29:00.114 [2024-12-05 14:19:06.098950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.098979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.114 qpair failed and we were unable to recover it. 00:29:00.114 [2024-12-05 14:19:06.099340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.099369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.114 qpair failed and we were unable to recover it. 00:29:00.114 [2024-12-05 14:19:06.099736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.114 [2024-12-05 14:19:06.099766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.139 qpair failed and we were unable to recover it. 00:29:00.139 [2024-12-05 14:19:06.100113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.139 [2024-12-05 14:19:06.100142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.139 qpair failed and we were unable to recover it. 00:29:00.139 [2024-12-05 14:19:06.100511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.139 [2024-12-05 14:19:06.100542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.139 qpair failed and we were unable to recover it. 00:29:00.139 [2024-12-05 14:19:06.100798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.139 [2024-12-05 14:19:06.100840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.139 qpair failed and we were unable to recover it. 
00:29:00.139 [2024-12-05 14:19:06.101185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.139 [2024-12-05 14:19:06.101214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.139 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed with errno = 111 in posix.c:1054:posix_sock_create, a sock connection error for tqpair=0x7f2aa0000b90 to addr=10.0.0.2, port=4420 in nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it.") repeats for every connection retry from 14:19:06.101 through 14:19:06.182; the duplicate entries are elided here ...]
00:29:00.145 [2024-12-05 14:19:06.181999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.145 [2024-12-05 14:19:06.182028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.145 qpair failed and we were unable to recover it.
00:29:00.145 [2024-12-05 14:19:06.182391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.182420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.182779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.182810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.183160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.183188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.183575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.183605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.184021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.184050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.184413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.184443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.184821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.184851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.185222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.185251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.185624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.185656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.186016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.186045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 
00:29:00.145 [2024-12-05 14:19:06.186407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.186437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.186840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.186870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.187242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.187271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.187625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.187655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.188006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.188035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.188399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.188428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.188802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.188833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.189193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.189222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.189464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.189496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.189861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.189891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 
00:29:00.145 [2024-12-05 14:19:06.190231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.190261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.190630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.190660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.191032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.191060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.191424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.191453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.191871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.191901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.192229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.192258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.192597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.192628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.192991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.193020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.193379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.193408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.193847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.193883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 
00:29:00.145 [2024-12-05 14:19:06.194219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.194249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.194598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.194630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.194890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.194918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.195266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.195297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.195644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.145 [2024-12-05 14:19:06.195675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.145 qpair failed and we were unable to recover it. 00:29:00.145 [2024-12-05 14:19:06.196045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.196075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.196434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.196477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.196816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.196844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.197100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.197129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.197486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.197517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 
00:29:00.146 [2024-12-05 14:19:06.197885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.197914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.198135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.198163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.198561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.198592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.198930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.198960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.199328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.199357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.199704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.199736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.200097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.200126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.200500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.200530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.200900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.200929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.201322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.201351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 
00:29:00.146 [2024-12-05 14:19:06.201689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.201720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.202061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.202090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.202521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.202551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.202889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.202917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.203298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.203328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.203686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.203717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.204074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.204103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.204444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.204490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.204821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.204851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.205212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.205241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 
00:29:00.146 [2024-12-05 14:19:06.205596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.205626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.205984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.206014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.206373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.206402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.206774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.206805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.207137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.207167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.207404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.207436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.207792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.207823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.208187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.208216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.146 [2024-12-05 14:19:06.208579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.146 [2024-12-05 14:19:06.208610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.146 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.208966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.209001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 
00:29:00.147 [2024-12-05 14:19:06.209427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.209463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.209870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.209900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.210265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.210295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.210662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.210693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.211060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.211089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.211453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.211491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.211766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.211795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.212179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.212207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.212574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.212605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.212977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.213006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 
00:29:00.147 [2024-12-05 14:19:06.213377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.213406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.213751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.213781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.214147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.214177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.214532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.214562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.214805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.214837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.215190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.215219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.215572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.215603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.215982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.216011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.216367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.216396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.216831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.216862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 
00:29:00.147 [2024-12-05 14:19:06.217224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.217253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.217623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.217654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.218017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.218046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.218333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.218361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.218695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.218725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.219099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.219128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.219388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.219417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.219786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.219817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.220188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.220217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.220468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.220498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 
00:29:00.147 [2024-12-05 14:19:06.220795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.220825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.221157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.221187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.221522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.221553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.221952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.221981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.222346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.222376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.222715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.147 [2024-12-05 14:19:06.222745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.147 qpair failed and we were unable to recover it. 00:29:00.147 [2024-12-05 14:19:06.223105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.223134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.223499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.223529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.223903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.223931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.224288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.224322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 
00:29:00.148 [2024-12-05 14:19:06.224686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.224716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.225077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.225105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.225451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.225493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.225778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.225807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.226148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.226177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.226536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.226567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.226933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.226961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.227320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.227349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.227615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.227644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.228011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.228040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 
00:29:00.148 [2024-12-05 14:19:06.228412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.228441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.228823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.228852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.229236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.229265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.229608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.229639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.230010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.230038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.230409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.230438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.230858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.230888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.231248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.231277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.231485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.231517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.231882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.231911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 
00:29:00.148 [2024-12-05 14:19:06.232271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.232300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.232675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.232705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.233046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.233076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.233330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.233358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.233705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.233736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.234104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.234133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.234510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.234540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.234910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.234939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.235290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.235319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 00:29:00.148 [2024-12-05 14:19:06.235702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.148 [2024-12-05 14:19:06.235734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.148 qpair failed and we were unable to recover it. 
00:29:00.154 [2024-12-05 14:19:06.308867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.308900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.309153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.309182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.309570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.309603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.309907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.309938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.310300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.310330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.310685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.310718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.311143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.311180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.311532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.311562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.311939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.311971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.312333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.312363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 
00:29:00.154 [2024-12-05 14:19:06.312796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.312828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.313168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.313198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.313583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.313614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.313985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.314017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.314370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.314399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.314791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.314824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.315177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.315208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.315561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.315591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.315973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.316002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.316364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.316395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 
00:29:00.154 [2024-12-05 14:19:06.316782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.316813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.317204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.317235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.317599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.317630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.317996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.318027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.318383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.318413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.318823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.318854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.319210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.319242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.319617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.319648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.319930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.319960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.320333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.320362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 
00:29:00.154 [2024-12-05 14:19:06.320746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.320776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.321133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.321164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.321533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.321566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.154 qpair failed and we were unable to recover it. 00:29:00.154 [2024-12-05 14:19:06.321856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.154 [2024-12-05 14:19:06.321885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.322254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.322283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.322514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.322547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.322914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.322943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.323303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.323335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.323666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.323697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.324034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.324064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 
00:29:00.155 [2024-12-05 14:19:06.324436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.324478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.324867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.324897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.325154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.325187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.325581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.325613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.325991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.326022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.326381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.326410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.326787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.326827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.327175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.327205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.327439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.327485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.327852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.327881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 
00:29:00.155 [2024-12-05 14:19:06.328251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.328281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.328635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.328666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.329016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.329055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.329406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.329435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.329815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.329845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.330205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.330234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.330671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.330704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.331042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.331073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.331336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.331369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.331745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.331778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 
00:29:00.155 [2024-12-05 14:19:06.332142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.332173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.332423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.332452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.332895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.332924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.333281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.333312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.333691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.333723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.334085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.334115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.155 [2024-12-05 14:19:06.334477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.155 [2024-12-05 14:19:06.334510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.155 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.334747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.334778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.335142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.335172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.335538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.335572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 
00:29:00.156 [2024-12-05 14:19:06.335937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.335969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.336328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.336358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.336599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.336629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.336993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.337026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.337389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.337418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.337777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.337809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.338147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.338178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.338534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.338565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.338910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.338948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.339364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.339392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 
00:29:00.156 [2024-12-05 14:19:06.339723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.339754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.340119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.340148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.340510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.340541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.340916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.340944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.341306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.341335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.341601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.341631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.341987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.342023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.342386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.342415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.342779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.342810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.343035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.343067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 
00:29:00.156 [2024-12-05 14:19:06.343437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.343478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.343819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.343848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.344205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.344234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.344573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.344604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.344970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.344999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.345300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.345329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.345706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.345737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.346069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.346098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.346490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.346521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.346857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.346887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 
00:29:00.156 [2024-12-05 14:19:06.347125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.347156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.347516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.347549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.347896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.347925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.348168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.348197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.156 [2024-12-05 14:19:06.348561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.156 [2024-12-05 14:19:06.348592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.156 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.348848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.348876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.349229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.349258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.349606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.349637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.349999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.350028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.350397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.350426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 
00:29:00.157 [2024-12-05 14:19:06.350724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.350756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.351013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.351042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.351400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.351429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.351815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.351845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.352175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.352205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.352606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.352637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.352974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.353002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.353399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.353428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.353786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.353816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.354180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.354209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 
00:29:00.157 [2024-12-05 14:19:06.354580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.354611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.354992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.355022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.355272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.355302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.355649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.355680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.355932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.355961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.356328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.356357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.356690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.356728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.357083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.357112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.357473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.357503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.357862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.357890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 
00:29:00.157 [2024-12-05 14:19:06.358263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.358292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.358687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.358718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.358976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.359008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.359366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.359395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.359748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.359778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.360135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.360164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.360534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.360565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.360901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.360930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.361305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.361334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.361684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.361715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 
00:29:00.157 [2024-12-05 14:19:06.362079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.362108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.362343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.362374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.157 [2024-12-05 14:19:06.362726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.157 [2024-12-05 14:19:06.362758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.157 qpair failed and we were unable to recover it. 00:29:00.158 [2024-12-05 14:19:06.363095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.158 [2024-12-05 14:19:06.363123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.158 qpair failed and we were unable to recover it. 00:29:00.158 [2024-12-05 14:19:06.363536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.158 [2024-12-05 14:19:06.363567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.158 qpair failed and we were unable to recover it. 00:29:00.158 [2024-12-05 14:19:06.363907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.158 [2024-12-05 14:19:06.363937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.158 qpair failed and we were unable to recover it. 00:29:00.158 [2024-12-05 14:19:06.364301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.158 [2024-12-05 14:19:06.364331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.158 qpair failed and we were unable to recover it. 00:29:00.158 [2024-12-05 14:19:06.364684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.158 [2024-12-05 14:19:06.364715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.158 qpair failed and we were unable to recover it. 00:29:00.158 [2024-12-05 14:19:06.365078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.158 [2024-12-05 14:19:06.365106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.158 qpair failed and we were unable to recover it. 00:29:00.158 [2024-12-05 14:19:06.365477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.158 [2024-12-05 14:19:06.365507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.158 qpair failed and we were unable to recover it. 
00:29:00.158 .. 00:29:00.433 [2024-12-05 14:19:06.365857 .. 14:19:06.437778] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: the same two *ERROR* records (connect() failed, errno = 111; sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420) repeat for every reconnect attempt in this interval, each attempt ending with "qpair failed and we were unable to recover it."
00:29:00.433 [2024-12-05 14:19:06.438140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.438170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.438511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.438541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.438908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.438936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.439293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.439322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.439691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.439722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.440092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.440121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.440564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.440595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.440968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.440997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.441374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.441404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.441769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.441800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 
00:29:00.433 [2024-12-05 14:19:06.442159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.442187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.442545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.442576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.442833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.442861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.443212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.443248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.443599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.443637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.444004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.444033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.444388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.444417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.444696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.444727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.445087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.445117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.445496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.445528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 
00:29:00.433 [2024-12-05 14:19:06.445878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.445908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.446269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.446298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.446678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.446708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.447087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.447116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.447406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.447437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.447803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.447832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.448192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.448221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.448578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.448608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.448959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.448988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.449350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.449378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 
00:29:00.433 [2024-12-05 14:19:06.449811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.449842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.450203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.450232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.450491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.450521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.450872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.450902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.451256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.451286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.451636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.451665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.452031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.452060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.452427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.452484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.452861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.452890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.453262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.453292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 
00:29:00.433 [2024-12-05 14:19:06.453519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.453552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.453957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.453987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.454412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.454442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.454773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.454805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.455171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.455201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.455567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.455598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.456040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.456070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.456322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.456351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.456629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.456660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.457041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.457071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 
00:29:00.433 [2024-12-05 14:19:06.457431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.433 [2024-12-05 14:19:06.457472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.433 qpair failed and we were unable to recover it. 00:29:00.433 [2024-12-05 14:19:06.457823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.457852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.458188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.458218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.458583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.458620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.458971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.459000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.459361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.459391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.459780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.459810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.460146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.460176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.460531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.460562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.460925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.460955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 
00:29:00.434 [2024-12-05 14:19:06.461296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.461325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.461559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.461592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.461948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.461978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.462344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.462373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.462724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.462755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.463129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.463158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.463519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.463550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.463961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.463990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.464232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.464261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.464640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.464671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 
00:29:00.434 [2024-12-05 14:19:06.465039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.465068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.465420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.465449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.465724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.465753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.466122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.466152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.466522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.466553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.466909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.466937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.467278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.467307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.467695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.467725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.468092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.468128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.468450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.468491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 
00:29:00.434 [2024-12-05 14:19:06.468881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.468910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.469269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.469298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.469678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.469709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.470073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.470102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.470478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.470509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.470867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.470896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.471259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.471289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.471550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.471580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.471964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.471993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.472355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.472384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 
00:29:00.434 [2024-12-05 14:19:06.472825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.472857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.473222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.473252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.473517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.473547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.473945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.473980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.474337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.474368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.474736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.474766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.475129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.475159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.475534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.475566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.475933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.475962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.476322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.476351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 
00:29:00.434 [2024-12-05 14:19:06.476711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.476741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.477104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.477134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.477397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.477425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.477801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.477831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.478209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.478238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.478602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.478632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.478885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.478918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.479287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.479317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.479693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.479725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.480050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.480078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 
00:29:00.434 [2024-12-05 14:19:06.480415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.480444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.480895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.480925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.481277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.434 [2024-12-05 14:19:06.481306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.434 qpair failed and we were unable to recover it. 00:29:00.434 [2024-12-05 14:19:06.481670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.481701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 00:29:00.435 [2024-12-05 14:19:06.481956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.481984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 00:29:00.435 [2024-12-05 14:19:06.482333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.482363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 00:29:00.435 [2024-12-05 14:19:06.482732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.482762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 00:29:00.435 [2024-12-05 14:19:06.483127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.483157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 00:29:00.435 [2024-12-05 14:19:06.483498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.483529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 00:29:00.435 [2024-12-05 14:19:06.483914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.483943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 
00:29:00.435 [2024-12-05 14:19:06.484303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.484333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 00:29:00.435 [2024-12-05 14:19:06.484689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.484719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 00:29:00.435 [2024-12-05 14:19:06.485060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.485090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 00:29:00.435 [2024-12-05 14:19:06.485477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.485508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 00:29:00.435 [2024-12-05 14:19:06.485890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.485918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 00:29:00.435 [2024-12-05 14:19:06.486280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.486309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 00:29:00.435 [2024-12-05 14:19:06.486686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.486716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 00:29:00.435 [2024-12-05 14:19:06.486969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.486997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 00:29:00.435 [2024-12-05 14:19:06.487355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.487384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 00:29:00.435 [2024-12-05 14:19:06.487754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.487785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 
00:29:00.435 [2024-12-05 14:19:06.488225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.488254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 00:29:00.435 [2024-12-05 14:19:06.488577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.488607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 00:29:00.435 [2024-12-05 14:19:06.488985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.489015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 00:29:00.435 [2024-12-05 14:19:06.489248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.489286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 00:29:00.435 [2024-12-05 14:19:06.489682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.489714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 00:29:00.435 [2024-12-05 14:19:06.489970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.490000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 00:29:00.435 [2024-12-05 14:19:06.490301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.490348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 00:29:00.435 [2024-12-05 14:19:06.490736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.490780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 00:29:00.435 [2024-12-05 14:19:06.491192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.491231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 00:29:00.435 [2024-12-05 14:19:06.491604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.435 [2024-12-05 14:19:06.491646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.435 qpair failed and we were unable to recover it. 
00:29:00.435 [2024-12-05 14:19:06.492049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.435 [2024-12-05 14:19:06.492089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.435 qpair failed and we were unable to recover it.
[... the same three-line error group repeats for every intervening connect attempt (timestamps 14:19:06.492476 through 14:19:06.573485), always with errno = 111 on tqpair=0x7f2aa0000b90, addr=10.0.0.2, port=4420 ...]
00:29:00.437 [2024-12-05 14:19:06.573833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.437 [2024-12-05 14:19:06.573863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.437 qpair failed and we were unable to recover it.
00:29:00.439 [2024-12-05 14:19:06.574156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.574185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.574534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.574565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.574912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.574943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.575306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.575335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.575759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.575789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.576152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.576184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.576528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.576558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.576915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.576945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.577191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.577220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.577573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.577603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 
00:29:00.439 [2024-12-05 14:19:06.577968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.577999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.578361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.578391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.578749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.578780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.579141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.579171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.579551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.579589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.579946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.579976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.580331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.580362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.580703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.580734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.580984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.581015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.581380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.581417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 
00:29:00.439 [2024-12-05 14:19:06.581761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.581793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.582195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.582226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.582584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.582616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.582982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.583018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.583376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.583405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.583762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.583794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.584167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.584196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.584571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.584602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.584990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.585021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.585387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.585417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 
00:29:00.439 [2024-12-05 14:19:06.585796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.585827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.585970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.585998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.586401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.586430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.586841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.586871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.587214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.587246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.587612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.587643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.588003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.588032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.588386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.588416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.588686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.588718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.588995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.589025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 
00:29:00.439 [2024-12-05 14:19:06.589359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.589388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.589745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.589777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.590149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.590180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.590543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.590574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.590928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.590968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.591341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.591370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.591740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.591771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.592151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.439 [2024-12-05 14:19:06.592181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.439 qpair failed and we were unable to recover it. 00:29:00.439 [2024-12-05 14:19:06.592437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.592476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.592731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.592761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 
00:29:00.440 [2024-12-05 14:19:06.593126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.593156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.593527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.593557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.593923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.593952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.594329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.594359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.594700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.594732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.595072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.595101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.595467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.595499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.595848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.595876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.596210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.596239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.596579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.596610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 
00:29:00.440 [2024-12-05 14:19:06.596988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.597017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.597279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.597307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.597502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.597537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.597902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.597939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.598291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.598321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.598687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.598718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.599063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.599092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.599472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.599501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.599856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.599885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.600233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.600262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 
00:29:00.440 [2024-12-05 14:19:06.600515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.600548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.600922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.600951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.601311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.601340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.601710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.601741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.602107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.602136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.602503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.602533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.602895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.602926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.603299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.603331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.603613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.603643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.603992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.604021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 
00:29:00.440 [2024-12-05 14:19:06.604380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.604409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.604776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.604807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.605214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.605244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.605602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.605634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.605992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.606020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.606380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.606412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.606803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.606835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.607192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.607222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.607589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.607620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.607990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.608020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 
00:29:00.440 [2024-12-05 14:19:06.608387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.608418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.608796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.608828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.609169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.609199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.609564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.609595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.609968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.609996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.610362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.610391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.610750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.610784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.611122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.611152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.611518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.611549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.611893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.611922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 
00:29:00.440 [2024-12-05 14:19:06.612284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.612313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.440 qpair failed and we were unable to recover it. 00:29:00.440 [2024-12-05 14:19:06.612589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.440 [2024-12-05 14:19:06.612619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.612998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.613026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.613389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.613426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.613859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.613889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.614245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.614274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.614539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.614569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.614941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.614971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.615332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.615362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.615718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.615749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 
00:29:00.441 [2024-12-05 14:19:06.616119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.616149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.616520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.616551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.616799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.616831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.617060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.617092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.617475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.617507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.617837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.617868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.618235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.618263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.618528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.618559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.618947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.618977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.619336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.619364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 
00:29:00.441 [2024-12-05 14:19:06.619728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.619761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.620095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.620124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.620489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.620519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.620886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.620914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.621283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.621312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.621679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.621710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.622069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.622099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.622438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.622481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.622854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.622883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 00:29:00.441 [2024-12-05 14:19:06.623247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.441 [2024-12-05 14:19:06.623276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.441 qpair failed and we were unable to recover it. 
00:29:00.441 [2024-12-05 14:19:06.623633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.441 [2024-12-05 14:19:06.623666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.441 qpair failed and we were unable to recover it.
[... the identical connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x7f2aa0000b90 (addr=10.0.0.2, port=4420) repeats continuously from 14:19:06.623633 through 14:19:06.702743; duplicate log entries omitted ...]
00:29:00.444 [2024-12-05 14:19:06.702711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.444 [2024-12-05 14:19:06.702743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.444 qpair failed and we were unable to recover it.
00:29:00.444 [2024-12-05 14:19:06.703072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.444 [2024-12-05 14:19:06.703102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.444 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.703476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.703507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.703860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.703888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.704222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.704252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.704605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.704637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.704996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.705025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.705381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.705411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.705771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.705801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.706167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.706197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.706589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.706619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 
00:29:00.445 [2024-12-05 14:19:06.707020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.707050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.707401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.707431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.707809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.707838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.708177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.708207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.708574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.708604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.708963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.708992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.709342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.709370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.709734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.709765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.710117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.710147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.710510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.710538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 
00:29:00.445 [2024-12-05 14:19:06.710907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.710936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.711301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.711330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.711694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.711724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.712056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.712085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.712447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.712489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.712791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.712819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.713176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.713205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.713568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.713600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.713849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.713881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.714321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.714352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 
00:29:00.445 [2024-12-05 14:19:06.714693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.714735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.715058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.715089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.715442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.715483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.715858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.715887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.716250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.716278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.716514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.716544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.716921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.716949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.717317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.445 [2024-12-05 14:19:06.717346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.445 qpair failed and we were unable to recover it. 00:29:00.445 [2024-12-05 14:19:06.717712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.717 [2024-12-05 14:19:06.717743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.717 qpair failed and we were unable to recover it. 00:29:00.717 [2024-12-05 14:19:06.717985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.717 [2024-12-05 14:19:06.718017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.717 qpair failed and we were unable to recover it. 
00:29:00.717 [2024-12-05 14:19:06.718266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.717 [2024-12-05 14:19:06.718297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.717 qpair failed and we were unable to recover it. 00:29:00.717 [2024-12-05 14:19:06.718638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.717 [2024-12-05 14:19:06.718670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.717 qpair failed and we were unable to recover it. 00:29:00.717 [2024-12-05 14:19:06.719038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.717 [2024-12-05 14:19:06.719067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.717 qpair failed and we were unable to recover it. 00:29:00.717 [2024-12-05 14:19:06.719298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.717 [2024-12-05 14:19:06.719326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.717 qpair failed and we were unable to recover it. 00:29:00.717 [2024-12-05 14:19:06.719671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.717 [2024-12-05 14:19:06.719703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.717 qpair failed and we were unable to recover it. 00:29:00.717 [2024-12-05 14:19:06.720105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.717 [2024-12-05 14:19:06.720134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.717 qpair failed and we were unable to recover it. 00:29:00.717 [2024-12-05 14:19:06.720492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.717 [2024-12-05 14:19:06.720523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.717 qpair failed and we were unable to recover it. 00:29:00.717 [2024-12-05 14:19:06.720852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.717 [2024-12-05 14:19:06.720881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.717 qpair failed and we were unable to recover it. 00:29:00.717 [2024-12-05 14:19:06.721239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.717 [2024-12-05 14:19:06.721267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.717 qpair failed and we were unable to recover it. 00:29:00.717 [2024-12-05 14:19:06.721527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.717 [2024-12-05 14:19:06.721559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.717 qpair failed and we were unable to recover it. 
00:29:00.718 [2024-12-05 14:19:06.721946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.721975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.722336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.722364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.722732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.722763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.723131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.723160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.723525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.723555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.723983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.724012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.724365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.724395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.724777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.724807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.725162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.725191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.725554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.725585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 
00:29:00.718 [2024-12-05 14:19:06.725950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.725981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.726311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.726340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.726677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.726707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.726943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.726973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.727293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.727324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.727678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.727708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.728067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.728096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.728501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.728532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.728910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.728939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.729277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.729305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 
00:29:00.718 [2024-12-05 14:19:06.729662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.729701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.730047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.730076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.730432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.730473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.730843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.730873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.731239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.731267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.731627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.731656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.732013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.732042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.732415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.732444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.732793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.732822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.733179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.733208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 
00:29:00.718 [2024-12-05 14:19:06.733639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.733670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.734015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.734044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.734385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.734414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.734777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.734808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.735171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.735200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.735540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.735570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.735949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.735979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.736346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.736376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.736757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.736787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.737152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.737182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 
00:29:00.718 [2024-12-05 14:19:06.737537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.737568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.737931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.737961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.738297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.738326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.738679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.718 [2024-12-05 14:19:06.738710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.718 qpair failed and we were unable to recover it. 00:29:00.718 [2024-12-05 14:19:06.739052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.739081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.739435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.739474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.739820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.739849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.740198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.740227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.740498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.740528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.740892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.740921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 
00:29:00.719 [2024-12-05 14:19:06.741283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.741312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.741679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.741711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.742075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.742104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.742475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.742505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.742787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.742817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.743192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.743221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.743583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.743613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.744009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.744038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.744395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.744425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.744772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.744803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 
00:29:00.719 [2024-12-05 14:19:06.745160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.745195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.745441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.745483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.745816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.745845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.746206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.746235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.746591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.746622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.746982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.747011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.747367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.747396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.747760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.747790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.748156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.748185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.748528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.748558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 
00:29:00.719 [2024-12-05 14:19:06.748816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.748845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.749202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.749232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.749493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.749522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.749920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.749949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.750309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.750338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.750628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.750659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.750889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.750917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.751269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.751299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.751539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.751569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.751795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.751828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 
00:29:00.719 [2024-12-05 14:19:06.752225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.752256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.752602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.752632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.753021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.753050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.753285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.753316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.753663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.753694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.754052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.754082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.754448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.754506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.754878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.754908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.719 [2024-12-05 14:19:06.755270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.719 [2024-12-05 14:19:06.755298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.719 qpair failed and we were unable to recover it. 00:29:00.720 [2024-12-05 14:19:06.755643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.720 [2024-12-05 14:19:06.755674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.720 qpair failed and we were unable to recover it. 
00:29:00.720 [2024-12-05 14:19:06.756097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.720 [2024-12-05 14:19:06.756126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.720 qpair failed and we were unable to recover it. 00:29:00.720 [2024-12-05 14:19:06.756559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.720 [2024-12-05 14:19:06.756590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.720 qpair failed and we were unable to recover it. 00:29:00.720 [2024-12-05 14:19:06.756959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.720 [2024-12-05 14:19:06.756988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.720 qpair failed and we were unable to recover it. 00:29:00.720 [2024-12-05 14:19:06.757366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.720 [2024-12-05 14:19:06.757394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.720 qpair failed and we were unable to recover it. 00:29:00.720 [2024-12-05 14:19:06.757739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.720 [2024-12-05 14:19:06.757769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.720 qpair failed and we were unable to recover it. 00:29:00.720 [2024-12-05 14:19:06.757999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.720 [2024-12-05 14:19:06.758028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.720 qpair failed and we were unable to recover it. 00:29:00.720 [2024-12-05 14:19:06.758393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.720 [2024-12-05 14:19:06.758422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.720 qpair failed and we were unable to recover it. 00:29:00.720 [2024-12-05 14:19:06.758810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.720 [2024-12-05 14:19:06.758841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.720 qpair failed and we were unable to recover it. 00:29:00.720 [2024-12-05 14:19:06.759208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.720 [2024-12-05 14:19:06.759236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.720 qpair failed and we were unable to recover it. 00:29:00.720 [2024-12-05 14:19:06.759599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.720 [2024-12-05 14:19:06.759630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.720 qpair failed and we were unable to recover it. 
00:29:00.720 [2024-12-05 14:19:06.760034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.760069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.760416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.760446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.760796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.760825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.761257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.761286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.761628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.761660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.762032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.762060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.762414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.762444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.762816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.762847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.763199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.763229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.763592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.763623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.763877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.763907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.764239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.764268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.764609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.764639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.765000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.765029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.765389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.765418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.765797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.765828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.766097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.766126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.766517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.766547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.766913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.766943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.767306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.767334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.767682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.767712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.768067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.768097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.768469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.768502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.768855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.768884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.769256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.769285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.769647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.769678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.770046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.770075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.770315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.770347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.770712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.770743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.771105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.771135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.771491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.771525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.771795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.771824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.720 [2024-12-05 14:19:06.772196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.720 [2024-12-05 14:19:06.772225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.720 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.772572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.772602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.772973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.773003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.773347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.773382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.773772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.773803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.774159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.774189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.774550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.774580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.774937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.774966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.775314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.775352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.775699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.775731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.776102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.776133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.776383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.776413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.776808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.776838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.777206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.777237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.777731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.777764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.778125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.778154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.778510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.778541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.778947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.778977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.779324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.779352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.779602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.779635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.779987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.780018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.780378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.780409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.780744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.780777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.781137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.781166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.781532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.781564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.781928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.781959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.782299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.782330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.782704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.782736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.783079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.783108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.783473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.783504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.783877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.783907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.784264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.784294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.784685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.784716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.785075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.785105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.785467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.785500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.785904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.785937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.786291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.786320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.786693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.786724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.787082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.721 [2024-12-05 14:19:06.787113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.721 qpair failed and we were unable to recover it.
00:29:00.721 [2024-12-05 14:19:06.787489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.787520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.787900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.787931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.788182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.788211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.788568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.788598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.788969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.788998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.789371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.789401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.789774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.789805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.790166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.790198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.790532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.790565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.790932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.790975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.791311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.791341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.791706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.791737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.792092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.792122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.792486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.792519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.792918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.792947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.793310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.793342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.793590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.793621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.793970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.794000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.794377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.794409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.794771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.794802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.795158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.795189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.795552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.795583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.795915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.795947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.796307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.796339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.796711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.796743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.797093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.797123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.797476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.797508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.797759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.797792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.798146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.798175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.798550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.798582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.798939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.798970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.799310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.799340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.799705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.799736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.800096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.800127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.800374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.800404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.800795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.800828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.801189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.801221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.801577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.801608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.801989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.802019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.802379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.802411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.802793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.802824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.803174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.803203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.803571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.803601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.803968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.804001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.804363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.804392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.722 qpair failed and we were unable to recover it.
00:29:00.722 [2024-12-05 14:19:06.804750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.722 [2024-12-05 14:19:06.804780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.805149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.805182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.805530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.805563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.805906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.805941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.806284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.806323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.806696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.806727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.807087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.807118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.807476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.807507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.807845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.807874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.808246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.808276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.808637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.808671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.810563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.810625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.810938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.810971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.811325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.811357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.811702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.811735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.812085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.812120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.812478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.812510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.812908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.812939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.813215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.813250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.813614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.813645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.814017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.814047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.814407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.814436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.814835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.814866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.815120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.815151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.815499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.815531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.815931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.815963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.816324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.816354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.816715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.816747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.817102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.817132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.817494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.817526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.817909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.817939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.818308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.818339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.818694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.818725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.818967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.818999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.819382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.819413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.819826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.819860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.820210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.820244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.820611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.820644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.821003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.821032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.821390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.821420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.821773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.821804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.822072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.822101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.822447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.822501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.822862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.822891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.723 qpair failed and we were unable to recover it.
00:29:00.723 [2024-12-05 14:19:06.823257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.723 [2024-12-05 14:19:06.823293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.724 qpair failed and we were unable to recover it.
00:29:00.724 [2024-12-05 14:19:06.823653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.724 [2024-12-05 14:19:06.823683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.724 qpair failed and we were unable to recover it.
00:29:00.724 [2024-12-05 14:19:06.824031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.724 [2024-12-05 14:19:06.824060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.724 qpair failed and we were unable to recover it.
00:29:00.724 [2024-12-05 14:19:06.824405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.724 [2024-12-05 14:19:06.824435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.724 qpair failed and we were unable to recover it.
00:29:00.724 [2024-12-05 14:19:06.824719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.724 [2024-12-05 14:19:06.824750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.724 qpair failed and we were unable to recover it.
00:29:00.724 [2024-12-05 14:19:06.825130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.724 [2024-12-05 14:19:06.825159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.724 qpair failed and we were unable to recover it.
00:29:00.724 [2024-12-05 14:19:06.825505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.724 [2024-12-05 14:19:06.825544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.724 qpair failed and we were unable to recover it.
00:29:00.724 [2024-12-05 14:19:06.825865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.724 [2024-12-05 14:19:06.825894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.724 qpair failed and we were unable to recover it.
00:29:00.724 [2024-12-05 14:19:06.826253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.724 [2024-12-05 14:19:06.826282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.724 qpair failed and we were unable to recover it.
00:29:00.724 [2024-12-05 14:19:06.826655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:00.724 [2024-12-05 14:19:06.826687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:00.724 qpair failed and we were unable to recover it.
00:29:00.724 [2024-12-05 14:19:06.827038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.827068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.827486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.827517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.827858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.827895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.828116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.828147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.828541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.828573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.828938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.828967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.829335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.829365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.829737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.829769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.830124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.830154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.830516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.830548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 
00:29:00.724 [2024-12-05 14:19:06.830910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.830941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.831304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.831333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.831711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.831741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.832109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.832139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.832509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.832539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.832939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.832968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.833310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.833339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.833786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.833823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.834148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.834177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.834553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.834583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 
00:29:00.724 [2024-12-05 14:19:06.834942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.834971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.835225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.835255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.835635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.835666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.836024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.836053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.836427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.836467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.836822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.836852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.837212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.837243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.837603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.837634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.838038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.838067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.838331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.838363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 
00:29:00.724 [2024-12-05 14:19:06.838618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.838652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.839038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.839068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.839432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.839474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.839837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.839867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.840240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.724 [2024-12-05 14:19:06.840270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.724 qpair failed and we were unable to recover it. 00:29:00.724 [2024-12-05 14:19:06.840526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.840558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.840953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.840983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.841353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.841383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.841639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.841671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.842030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.842059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 
00:29:00.725 [2024-12-05 14:19:06.842437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.842477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.842890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.842920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.843267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.843298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.843665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.843695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.844063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.844093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.844447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.844487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.844834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.844864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.845225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.845255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.845616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.845647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.846029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.846058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 
00:29:00.725 [2024-12-05 14:19:06.846416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.846445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.846892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.846923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.847285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.847316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.847553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.847586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.848006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.848036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.848384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.848415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.848811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.848842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.849181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.849242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.849480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.849514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.849869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.849898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 
00:29:00.725 [2024-12-05 14:19:06.850257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.850286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.850664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.850697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.851054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.851084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.851324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.851355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.851603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.851636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.852008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.852037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.852403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.852432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.852877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.852907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.853267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.853296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.853647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.853680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 
00:29:00.725 [2024-12-05 14:19:06.853932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.853961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.854315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.854345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.854687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.854719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.854974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.855004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.855442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.855503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.725 qpair failed and we were unable to recover it. 00:29:00.725 [2024-12-05 14:19:06.855876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.725 [2024-12-05 14:19:06.855906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.856274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.856303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.856683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.856715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.857089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.857118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.857492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.857523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 
00:29:00.726 [2024-12-05 14:19:06.857915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.857945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.858280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.858310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.858648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.858677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.859038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.859067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.859428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.859469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.859883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.859913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.860272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.860300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.860642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.860673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.860957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.860987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.861392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.861421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 
00:29:00.726 [2024-12-05 14:19:06.861765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.861797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.862082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.862113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.862477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.862509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.862855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.862887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.863239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.863268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.863615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.863648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.864068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.864097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.864470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.864509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.864870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.864900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.865270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.865299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 
00:29:00.726 [2024-12-05 14:19:06.865682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.865713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.866053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.866082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.866441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.866482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.866841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.866871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.867202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.867231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.867494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.867525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.867894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.867923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.868168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.868200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.868545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.868577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.868943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.868972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 
00:29:00.726 [2024-12-05 14:19:06.869335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.869366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.869737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.869769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.870126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.870156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.870514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.870547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.870904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.870936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.871293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.871322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.871712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.871743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.872030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.872059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.872414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.872443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 00:29:00.726 [2024-12-05 14:19:06.872796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.726 [2024-12-05 14:19:06.872826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.726 qpair failed and we were unable to recover it. 
00:29:00.726 [2024-12-05 14:19:06.873184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.873214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.873563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.873593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.873972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.874004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.874365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.874403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.874792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.874824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.875118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.875147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.875491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.875522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.875890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.875920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.876289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.876318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.876676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.876706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 
00:29:00.727 [2024-12-05 14:19:06.877069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.877098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.877466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.877497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.877747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.877775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.878156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.878185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.878552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.878583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.878938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.878968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.879311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.879340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.879676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.879713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.880064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.880093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.880468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.880499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 
00:29:00.727 [2024-12-05 14:19:06.880843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.880872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.881222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.881250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.881608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.881638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.882000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.882030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.882389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.882420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.882782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.882812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.883177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.883211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.883573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.883607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.883963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.883994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 00:29:00.727 [2024-12-05 14:19:06.884356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.884386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it. 
00:29:00.727 [2024-12-05 14:19:06.884653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.727 [2024-12-05 14:19:06.884683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.727 qpair failed and we were unable to recover it.
[... the same two-line error pair -- posix.c:1054:posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 -- repeats continuously from 14:19:06.884653 through 14:19:06.964123 with only the timestamps changing; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:00.732 [2024-12-05 14:19:06.964094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.732 [2024-12-05 14:19:06.964123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.732 qpair failed and we were unable to recover it.
00:29:00.732 [2024-12-05 14:19:06.964483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.732 [2024-12-05 14:19:06.964514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.732 qpair failed and we were unable to recover it. 00:29:00.732 [2024-12-05 14:19:06.964873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.732 [2024-12-05 14:19:06.964903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.732 qpair failed and we were unable to recover it. 00:29:00.732 [2024-12-05 14:19:06.965259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.732 [2024-12-05 14:19:06.965289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.732 qpair failed and we were unable to recover it. 00:29:00.732 [2024-12-05 14:19:06.965675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.732 [2024-12-05 14:19:06.965705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.732 qpair failed and we were unable to recover it. 00:29:00.732 [2024-12-05 14:19:06.966068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.732 [2024-12-05 14:19:06.966098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.732 qpair failed and we were unable to recover it. 00:29:00.732 [2024-12-05 14:19:06.966448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.732 [2024-12-05 14:19:06.966493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.732 qpair failed and we were unable to recover it. 00:29:00.732 [2024-12-05 14:19:06.966828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.732 [2024-12-05 14:19:06.966857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.732 qpair failed and we were unable to recover it. 00:29:00.732 [2024-12-05 14:19:06.967200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.732 [2024-12-05 14:19:06.967229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.732 qpair failed and we were unable to recover it. 00:29:00.732 [2024-12-05 14:19:06.967593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.732 [2024-12-05 14:19:06.967624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.732 qpair failed and we were unable to recover it. 00:29:00.732 [2024-12-05 14:19:06.967994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.732 [2024-12-05 14:19:06.968022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.732 qpair failed and we were unable to recover it. 
00:29:00.732 [2024-12-05 14:19:06.968385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.732 [2024-12-05 14:19:06.968415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.732 qpair failed and we were unable to recover it. 00:29:00.732 [2024-12-05 14:19:06.968787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.732 [2024-12-05 14:19:06.968819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.732 qpair failed and we were unable to recover it. 00:29:00.732 [2024-12-05 14:19:06.969198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.732 [2024-12-05 14:19:06.969228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.732 qpair failed and we were unable to recover it. 00:29:00.732 [2024-12-05 14:19:06.969443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.732 [2024-12-05 14:19:06.969486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.732 qpair failed and we were unable to recover it. 00:29:00.732 [2024-12-05 14:19:06.969861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.732 [2024-12-05 14:19:06.969891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.732 qpair failed and we were unable to recover it. 00:29:00.732 [2024-12-05 14:19:06.970255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.732 [2024-12-05 14:19:06.970283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.732 qpair failed and we were unable to recover it. 00:29:00.732 [2024-12-05 14:19:06.970588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.732 [2024-12-05 14:19:06.970618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.732 qpair failed and we were unable to recover it. 00:29:00.732 [2024-12-05 14:19:06.970993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.732 [2024-12-05 14:19:06.971029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.732 qpair failed and we were unable to recover it. 00:29:00.732 [2024-12-05 14:19:06.971342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.732 [2024-12-05 14:19:06.971370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.732 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.971720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.971752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 
00:29:00.733 [2024-12-05 14:19:06.971997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.972026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.972369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.972398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.972746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.972777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.973135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.973164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.973532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.973562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.973920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.973949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.974307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.974335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.974689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.974720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.975057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.975087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.975326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.975357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 
00:29:00.733 [2024-12-05 14:19:06.975738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.975769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.976198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.976228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.976585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.976615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.976983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.977013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.977272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.977302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.977669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.977701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.978057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.978087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.978465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.978495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.978838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.978867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.979227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.979256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 
00:29:00.733 [2024-12-05 14:19:06.979614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.979645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.980011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.980041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.980400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.980429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.980690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.980723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.981078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.981108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.981481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.981513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.981858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.981887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.982256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.982286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.982627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.982658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.983027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.983056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 
00:29:00.733 [2024-12-05 14:19:06.983419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.983448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.983864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.983894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.984248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.984277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.984612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.984643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.985005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.985035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.985395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.985424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.733 qpair failed and we were unable to recover it. 00:29:00.733 [2024-12-05 14:19:06.985864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.733 [2024-12-05 14:19:06.985895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.986249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.986283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.986626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.986657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.987023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.987052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 
00:29:00.734 [2024-12-05 14:19:06.987306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.987336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.987591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.987625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.987992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.988021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.988390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.988419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.988681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.988712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.989132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.989162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.989529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.989560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.989906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.989934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.990292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.990322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.990697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.990727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 
00:29:00.734 [2024-12-05 14:19:06.991075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.991104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.991344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.991374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.991730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.991760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.992124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.992153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.992410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.992440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.992825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.992855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.993219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.993249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.993606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.993638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.993997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.994026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.994376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.994404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 
00:29:00.734 [2024-12-05 14:19:06.994765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.994795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.995149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.995178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.995532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.995563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.995964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.995993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.996354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.996384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.996746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.996776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.997137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.997166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.997539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.997569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.997819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.997848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.998211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.998242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 
00:29:00.734 [2024-12-05 14:19:06.998594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.998624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.998913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.998942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.999176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.999208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.999578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.999608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:06.999956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:06.999985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:07.000342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:07.000372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:07.000718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:07.000748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:07.001086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:07.001122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:07.001450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:07.001507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 00:29:00.734 [2024-12-05 14:19:07.001862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.734 [2024-12-05 14:19:07.001891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.734 qpair failed and we were unable to recover it. 
00:29:00.734 [2024-12-05 14:19:07.002252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.735 [2024-12-05 14:19:07.002281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.735 qpair failed and we were unable to recover it. 00:29:00.735 [2024-12-05 14:19:07.002534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.735 [2024-12-05 14:19:07.002566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.735 qpair failed and we were unable to recover it. 00:29:00.735 [2024-12-05 14:19:07.002973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:00.735 [2024-12-05 14:19:07.003002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:00.735 qpair failed and we were unable to recover it. 00:29:01.007 [2024-12-05 14:19:07.003379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.007 [2024-12-05 14:19:07.003411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.007 qpair failed and we were unable to recover it. 00:29:01.007 [2024-12-05 14:19:07.003747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.003778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.008 [2024-12-05 14:19:07.004141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.004172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.008 [2024-12-05 14:19:07.004538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.004568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.008 [2024-12-05 14:19:07.004915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.004944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.008 [2024-12-05 14:19:07.007385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.007473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.008 [2024-12-05 14:19:07.007961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.007999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 
00:29:01.008 [2024-12-05 14:19:07.008363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.008393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.008 [2024-12-05 14:19:07.008844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.008877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.008 [2024-12-05 14:19:07.009239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.009270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.008 [2024-12-05 14:19:07.009630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.009661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.008 [2024-12-05 14:19:07.010024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.010055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.008 [2024-12-05 14:19:07.010416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.010445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.008 [2024-12-05 14:19:07.010724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.010755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.008 [2024-12-05 14:19:07.011105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.011135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.008 [2024-12-05 14:19:07.011481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.011511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.008 [2024-12-05 14:19:07.011814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.011843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 
00:29:01.008 [2024-12-05 14:19:07.012217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.012248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.008 [2024-12-05 14:19:07.012602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.012634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.008 [2024-12-05 14:19:07.013001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.013033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.008 [2024-12-05 14:19:07.013287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.013317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.008 [2024-12-05 14:19:07.013662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.013694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.008 [2024-12-05 14:19:07.014053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.014082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.008 [2024-12-05 14:19:07.014444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.014490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.008 [2024-12-05 14:19:07.016922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.016991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.008 [2024-12-05 14:19:07.017305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.017341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.008 [2024-12-05 14:19:07.017705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.017738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 
00:29:01.008 [2024-12-05 14:19:07.018106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.018136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.008 [2024-12-05 14:19:07.018503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.008 [2024-12-05 14:19:07.018536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.008 qpair failed and we were unable to recover it. 00:29:01.009 [2024-12-05 14:19:07.018913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.009 [2024-12-05 14:19:07.018945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.009 qpair failed and we were unable to recover it. 00:29:01.009 [2024-12-05 14:19:07.019194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.009 [2024-12-05 14:19:07.019230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.009 qpair failed and we were unable to recover it. 00:29:01.009 [2024-12-05 14:19:07.019619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.009 [2024-12-05 14:19:07.019652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.009 qpair failed and we were unable to recover it. 00:29:01.009 [2024-12-05 14:19:07.020023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.009 [2024-12-05 14:19:07.020053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.009 qpair failed and we were unable to recover it. 00:29:01.009 [2024-12-05 14:19:07.020412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.009 [2024-12-05 14:19:07.020443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.009 qpair failed and we were unable to recover it. 00:29:01.009 [2024-12-05 14:19:07.020872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.009 [2024-12-05 14:19:07.020912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.009 qpair failed and we were unable to recover it. 00:29:01.009 [2024-12-05 14:19:07.021179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.009 [2024-12-05 14:19:07.021213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.009 qpair failed and we were unable to recover it. 00:29:01.009 [2024-12-05 14:19:07.021587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.009 [2024-12-05 14:19:07.021620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.009 qpair failed and we were unable to recover it. 
00:29:01.016 [2024-12-05 14:19:07.100783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.100814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 00:29:01.016 [2024-12-05 14:19:07.101100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.101130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 00:29:01.016 [2024-12-05 14:19:07.101503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.101535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 00:29:01.016 [2024-12-05 14:19:07.101888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.101917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 00:29:01.016 [2024-12-05 14:19:07.102280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.102310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 00:29:01.016 [2024-12-05 14:19:07.102553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.102587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 00:29:01.016 [2024-12-05 14:19:07.103025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.103054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 00:29:01.016 [2024-12-05 14:19:07.103399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.103427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 00:29:01.016 [2024-12-05 14:19:07.103830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.103861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 00:29:01.016 [2024-12-05 14:19:07.104229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.104258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 
00:29:01.016 [2024-12-05 14:19:07.104622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.104652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 00:29:01.016 [2024-12-05 14:19:07.105025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.105055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 00:29:01.016 [2024-12-05 14:19:07.105425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.105481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 00:29:01.016 [2024-12-05 14:19:07.105851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.105881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 00:29:01.016 [2024-12-05 14:19:07.106231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.106260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 00:29:01.016 [2024-12-05 14:19:07.106624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.106656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 00:29:01.016 [2024-12-05 14:19:07.107015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.107045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 00:29:01.016 [2024-12-05 14:19:07.107393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.107422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 00:29:01.016 [2024-12-05 14:19:07.107780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.107810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 00:29:01.016 [2024-12-05 14:19:07.108170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.108200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 
00:29:01.016 [2024-12-05 14:19:07.108586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.108617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 00:29:01.016 [2024-12-05 14:19:07.108993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.109022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 00:29:01.016 [2024-12-05 14:19:07.109385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.109414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 00:29:01.016 [2024-12-05 14:19:07.109843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.109874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 00:29:01.016 [2024-12-05 14:19:07.110111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.016 [2024-12-05 14:19:07.110144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.016 qpair failed and we were unable to recover it. 00:29:01.016 [2024-12-05 14:19:07.110514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.110545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.110914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.110943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.111300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.111330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.111727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.111758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.112122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.112151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 
00:29:01.017 [2024-12-05 14:19:07.112513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.112543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.112913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.112942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.113308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.113339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.113676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.113705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.114060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.114095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.114474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.114506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.114734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.114767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.115112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.115141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.115501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.115533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.115906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.115936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 
00:29:01.017 [2024-12-05 14:19:07.116304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.116332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.116686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.116716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.117074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.117103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.117362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.117391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.117753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.117784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.118150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.118180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.118531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.118563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.118904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.118933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.119273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.119302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.119571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.119602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 
00:29:01.017 [2024-12-05 14:19:07.119970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.120001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.120245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.120274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.120613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.120643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.121004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.121035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.121377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.017 [2024-12-05 14:19:07.121406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.017 qpair failed and we were unable to recover it. 00:29:01.017 [2024-12-05 14:19:07.121759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.121789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.122152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.122181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.122436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.122482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.122918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.122948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.123346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.123375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 
00:29:01.018 [2024-12-05 14:19:07.123719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.123750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.124088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.124117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.124403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.124433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.124849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.124880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.125243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.125272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.125625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.125656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.125948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.125977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.126344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.126375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.126727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.126758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.127116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.127145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 
00:29:01.018 [2024-12-05 14:19:07.127503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.127535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.127978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.128007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.128367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.128396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.128654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.128685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.129079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.129115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.129479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.129511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.129862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.129892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.130303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.130332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.130595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.130624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.130986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.131015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 
00:29:01.018 [2024-12-05 14:19:07.131259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.131292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.131630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.131661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.132023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.132052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.018 qpair failed and we were unable to recover it. 00:29:01.018 [2024-12-05 14:19:07.132407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.018 [2024-12-05 14:19:07.132436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 00:29:01.019 [2024-12-05 14:19:07.132802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.132832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 00:29:01.019 [2024-12-05 14:19:07.133195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.133226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 00:29:01.019 [2024-12-05 14:19:07.133599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.133630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 00:29:01.019 [2024-12-05 14:19:07.133998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.134027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 00:29:01.019 [2024-12-05 14:19:07.134386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.134415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 00:29:01.019 [2024-12-05 14:19:07.134812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.134842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 
00:29:01.019 [2024-12-05 14:19:07.135210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.135240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 00:29:01.019 [2024-12-05 14:19:07.135603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.135635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 00:29:01.019 [2024-12-05 14:19:07.136001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.136030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 00:29:01.019 [2024-12-05 14:19:07.136429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.136473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 00:29:01.019 [2024-12-05 14:19:07.136693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.136726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 00:29:01.019 [2024-12-05 14:19:07.137089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.137118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 00:29:01.019 [2024-12-05 14:19:07.137474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.137505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 00:29:01.019 [2024-12-05 14:19:07.137901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.137931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 00:29:01.019 [2024-12-05 14:19:07.138288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.138317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 00:29:01.019 [2024-12-05 14:19:07.138794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.138825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 
00:29:01.019 [2024-12-05 14:19:07.139171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.139200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 00:29:01.019 [2024-12-05 14:19:07.139568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.139601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 00:29:01.019 [2024-12-05 14:19:07.140014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.140044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 00:29:01.019 [2024-12-05 14:19:07.140338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.140369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 00:29:01.019 [2024-12-05 14:19:07.142309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.142376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 00:29:01.019 [2024-12-05 14:19:07.142780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.142819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 00:29:01.019 [2024-12-05 14:19:07.144767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.144834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 00:29:01.019 [2024-12-05 14:19:07.145274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.145310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 00:29:01.019 [2024-12-05 14:19:07.145686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.145719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 00:29:01.019 [2024-12-05 14:19:07.148031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.148101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.019 qpair failed and we were unable to recover it. 
00:29:01.019 [2024-12-05 14:19:07.148539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.019 [2024-12-05 14:19:07.148577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.148965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.148997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.150814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.150874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.151233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.151270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.151636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.151683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.152067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.152095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.152475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.152502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.152817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.152843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.153207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.153232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.153675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.153703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 
00:29:01.020 [2024-12-05 14:19:07.154050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.154081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.154446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.154484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.154861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.154886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.155225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.155251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.155589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.155619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.155987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.156013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.156380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.156405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.156842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.156874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.157206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.157234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.157483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.157510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 
00:29:01.020 [2024-12-05 14:19:07.157861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.157887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.158261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.158287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.158705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.158733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.159035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.159061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.159434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.159474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.159847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.159873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.160109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.160135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.020 qpair failed and we were unable to recover it. 00:29:01.020 [2024-12-05 14:19:07.160490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.020 [2024-12-05 14:19:07.160519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.021 qpair failed and we were unable to recover it. 00:29:01.021 [2024-12-05 14:19:07.160871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.021 [2024-12-05 14:19:07.160896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.021 qpair failed and we were unable to recover it. 00:29:01.021 [2024-12-05 14:19:07.161261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.021 [2024-12-05 14:19:07.161287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.021 qpair failed and we were unable to recover it. 
00:29:01.021 [2024-12-05 14:19:07.161572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.021 [2024-12-05 14:19:07.161599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.021 qpair failed and we were unable to recover it.
[... the same three-message failure sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for roughly 200 further reconnect attempts between 14:19:07.162 and 14:19:07.240 ...]
00:29:01.028 [2024-12-05 14:19:07.240148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.028 [2024-12-05 14:19:07.240178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.028 qpair failed and we were unable to recover it.
00:29:01.028 [2024-12-05 14:19:07.240538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.028 [2024-12-05 14:19:07.240570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.028 qpair failed and we were unable to recover it. 00:29:01.028 [2024-12-05 14:19:07.240937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.028 [2024-12-05 14:19:07.240966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.028 qpair failed and we were unable to recover it. 00:29:01.028 [2024-12-05 14:19:07.241325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.028 [2024-12-05 14:19:07.241354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.028 qpair failed and we were unable to recover it. 00:29:01.028 [2024-12-05 14:19:07.241704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.028 [2024-12-05 14:19:07.241733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.028 qpair failed and we were unable to recover it. 00:29:01.028 [2024-12-05 14:19:07.242094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.028 [2024-12-05 14:19:07.242124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.028 qpair failed and we were unable to recover it. 00:29:01.028 [2024-12-05 14:19:07.242359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.028 [2024-12-05 14:19:07.242388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.028 qpair failed and we were unable to recover it. 00:29:01.028 [2024-12-05 14:19:07.242829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.028 [2024-12-05 14:19:07.242860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.028 qpair failed and we were unable to recover it. 00:29:01.028 [2024-12-05 14:19:07.243203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.028 [2024-12-05 14:19:07.243233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.028 qpair failed and we were unable to recover it. 00:29:01.028 [2024-12-05 14:19:07.243605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.028 [2024-12-05 14:19:07.243636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.028 qpair failed and we were unable to recover it. 00:29:01.028 [2024-12-05 14:19:07.243990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.028 [2024-12-05 14:19:07.244020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.028 qpair failed and we were unable to recover it. 
00:29:01.028 [2024-12-05 14:19:07.244271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.028 [2024-12-05 14:19:07.244301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.028 qpair failed and we were unable to recover it. 00:29:01.028 [2024-12-05 14:19:07.244663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.244694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.244951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.244980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.245336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.245367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.245731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.245761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.246087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.246117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.246363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.246396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.246791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.246821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.247179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.247209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.247586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.247620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 
00:29:01.029 [2024-12-05 14:19:07.247986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.248016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.248265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.248298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.248713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.248744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.248982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.249012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.249366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.249397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.249753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.249785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.250152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.250181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.250540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.250571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.250951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.250981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.251335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.251365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 
00:29:01.029 [2024-12-05 14:19:07.251702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.251733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.252160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.252195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.252436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.252484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.252854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.252884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.253245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.253274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.253633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.253663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.254097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.254126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.254470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.254501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.254843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.029 [2024-12-05 14:19:07.254872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.029 qpair failed and we were unable to recover it. 00:29:01.029 [2024-12-05 14:19:07.255231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.255260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 
00:29:01.030 [2024-12-05 14:19:07.255624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.255656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.256009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.256038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.256377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.256406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.256828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.256861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.257221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.257251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.257605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.257636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.257995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.258024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.258388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.258417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.258826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.258857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.259221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.259251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 
00:29:01.030 [2024-12-05 14:19:07.259504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.259538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.259811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.259840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.260201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.260230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.260594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.260625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.260986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.261017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.261371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.261401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.261758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.261789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.262147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.262177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.262542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.262573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.262925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.262954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 
00:29:01.030 [2024-12-05 14:19:07.263314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.263344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.263615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.263645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.264006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.264036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.264410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.264439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.264806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.264835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.265191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.265220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.265580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.265610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.030 [2024-12-05 14:19:07.265958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.030 [2024-12-05 14:19:07.265988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.030 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.266246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.266280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.266534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.266568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 
00:29:01.031 [2024-12-05 14:19:07.266964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.266994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.267366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.267401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.267757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.267787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.268150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.268181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.268540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.268573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.268953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.268983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.269332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.269361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.269696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.269727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.270086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.270114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.270476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.270507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 
00:29:01.031 [2024-12-05 14:19:07.270877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.270906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.271277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.271307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.271692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.271722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.272084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.272112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.272258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.272290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.272716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.272748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.273092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.273121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.273484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.273515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.273791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.273824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.274234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.274263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 
00:29:01.031 [2024-12-05 14:19:07.274624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.274655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.275019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.275048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.275423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.275453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.275837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.275867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.276122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.276155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.276439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.276490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.031 [2024-12-05 14:19:07.276895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.031 [2024-12-05 14:19:07.276924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.031 qpair failed and we were unable to recover it. 00:29:01.032 [2024-12-05 14:19:07.277301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.277331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 00:29:01.032 [2024-12-05 14:19:07.277677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.277708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 00:29:01.032 [2024-12-05 14:19:07.278070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.278101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 
00:29:01.032 [2024-12-05 14:19:07.278447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.278489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 00:29:01.032 [2024-12-05 14:19:07.278866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.278894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 00:29:01.032 [2024-12-05 14:19:07.279255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.279286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 00:29:01.032 [2024-12-05 14:19:07.279656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.279688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 00:29:01.032 [2024-12-05 14:19:07.280050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.280079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 00:29:01.032 [2024-12-05 14:19:07.280304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.280338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 00:29:01.032 [2024-12-05 14:19:07.280678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.280709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 00:29:01.032 [2024-12-05 14:19:07.281092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.281122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 00:29:01.032 [2024-12-05 14:19:07.281484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.281515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 00:29:01.032 [2024-12-05 14:19:07.281867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.281898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 
00:29:01.032 [2024-12-05 14:19:07.282260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.282289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 00:29:01.032 [2024-12-05 14:19:07.282662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.282703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 00:29:01.032 [2024-12-05 14:19:07.283039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.283068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 00:29:01.032 [2024-12-05 14:19:07.283431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.283473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 00:29:01.032 [2024-12-05 14:19:07.283853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.283882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 00:29:01.032 [2024-12-05 14:19:07.284148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.284177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 00:29:01.032 [2024-12-05 14:19:07.284528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.284559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 00:29:01.032 [2024-12-05 14:19:07.284904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.284933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 00:29:01.032 [2024-12-05 14:19:07.285302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.285331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 00:29:01.032 [2024-12-05 14:19:07.285763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.285794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 
00:29:01.032 [2024-12-05 14:19:07.286147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.286176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 00:29:01.032 [2024-12-05 14:19:07.286442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.032 [2024-12-05 14:19:07.286484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.032 qpair failed and we were unable to recover it. 00:29:01.033 [2024-12-05 14:19:07.286857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.033 [2024-12-05 14:19:07.286887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.033 qpair failed and we were unable to recover it. 00:29:01.033 [2024-12-05 14:19:07.287252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.033 [2024-12-05 14:19:07.287281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.033 qpair failed and we were unable to recover it. 00:29:01.033 [2024-12-05 14:19:07.287634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.033 [2024-12-05 14:19:07.287664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.033 qpair failed and we were unable to recover it. 00:29:01.033 [2024-12-05 14:19:07.288029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.033 [2024-12-05 14:19:07.288058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.033 qpair failed and we were unable to recover it. 00:29:01.033 [2024-12-05 14:19:07.288420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.033 [2024-12-05 14:19:07.288449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.033 qpair failed and we were unable to recover it. 00:29:01.033 [2024-12-05 14:19:07.288827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.033 [2024-12-05 14:19:07.288857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.033 qpair failed and we were unable to recover it. 00:29:01.033 [2024-12-05 14:19:07.289219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.033 [2024-12-05 14:19:07.289248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.033 qpair failed and we were unable to recover it. 00:29:01.033 [2024-12-05 14:19:07.289581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.033 [2024-12-05 14:19:07.289612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.033 qpair failed and we were unable to recover it. 
00:29:01.033 [2024-12-05 14:19:07.289991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.033 [2024-12-05 14:19:07.290021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.033 qpair failed and we were unable to recover it. 00:29:01.033 [2024-12-05 14:19:07.290356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.033 [2024-12-05 14:19:07.290386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.033 qpair failed and we were unable to recover it. 00:29:01.033 [2024-12-05 14:19:07.290747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.033 [2024-12-05 14:19:07.290778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.033 qpair failed and we were unable to recover it. 00:29:01.033 [2024-12-05 14:19:07.291135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.033 [2024-12-05 14:19:07.291164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.033 qpair failed and we were unable to recover it. 00:29:01.033 [2024-12-05 14:19:07.291425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.033 [2024-12-05 14:19:07.291469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.033 qpair failed and we were unable to recover it. 00:29:01.033 [2024-12-05 14:19:07.291889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.033 [2024-12-05 14:19:07.291918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.033 qpair failed and we were unable to recover it. 00:29:01.033 [2024-12-05 14:19:07.292274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.033 [2024-12-05 14:19:07.292303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.033 qpair failed and we were unable to recover it. 00:29:01.033 [2024-12-05 14:19:07.292679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.033 [2024-12-05 14:19:07.292709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.033 qpair failed and we were unable to recover it. 00:29:01.033 [2024-12-05 14:19:07.293075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.033 [2024-12-05 14:19:07.293105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.033 qpair failed and we were unable to recover it. 00:29:01.033 [2024-12-05 14:19:07.293481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.033 [2024-12-05 14:19:07.293513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.033 qpair failed and we were unable to recover it. 
00:29:01.321 [2024-12-05 14:19:07.372041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.321 [2024-12-05 14:19:07.372071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.321 qpair failed and we were unable to recover it. 00:29:01.321 [2024-12-05 14:19:07.372303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.321 [2024-12-05 14:19:07.372334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.321 qpair failed and we were unable to recover it. 00:29:01.321 [2024-12-05 14:19:07.372740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.321 [2024-12-05 14:19:07.372770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.321 qpair failed and we were unable to recover it. 00:29:01.321 [2024-12-05 14:19:07.373132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.373161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.373540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.373570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.373958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.373987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.374350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.374380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.374741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.374772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.375126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.375154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.375523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.375554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 
00:29:01.322 [2024-12-05 14:19:07.375903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.375932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.376308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.376344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.376681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.376712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.377131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.377161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.377477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.377508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.377865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.377895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.378334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.378363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.378697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.378730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.379091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.379121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.379482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.379513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 
00:29:01.322 [2024-12-05 14:19:07.379765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.379795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.380053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.380082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.380330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.380361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.380753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.380784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.381135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.381164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.381407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.381437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.381795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.381825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.382158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.382187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.382552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.382583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.382948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.382977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 
00:29:01.322 [2024-12-05 14:19:07.383387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.383416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.383809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.322 [2024-12-05 14:19:07.383840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.322 qpair failed and we were unable to recover it. 00:29:01.322 [2024-12-05 14:19:07.384199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.384228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.384576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.384606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.384966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.384996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.385367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.385395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.385756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.385786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.386166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.386197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.386561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.386592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.386977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.387006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 
00:29:01.323 [2024-12-05 14:19:07.387357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.387386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.387746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.387777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.388077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.388107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.388476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.388507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.388860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.388890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.389250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.389279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.389638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.389669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.390035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.390064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.390336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.390367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.390648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.390679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 
00:29:01.323 [2024-12-05 14:19:07.391066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.391095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.391476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.391513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.391855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.391884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.392256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.392285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.392654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.392685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.393009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.393039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.393391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.393421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.393781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.393812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.394167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.394197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.394491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.394523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 
00:29:01.323 [2024-12-05 14:19:07.394775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.323 [2024-12-05 14:19:07.394805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.323 qpair failed and we were unable to recover it. 00:29:01.323 [2024-12-05 14:19:07.395196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.395225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.395569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.395600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.395936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.395966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.396326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.396354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.396698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.396728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.397080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.397110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.397450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.397492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.397752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.397785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.398182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.398212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 
00:29:01.324 [2024-12-05 14:19:07.398556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.398588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.398968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.398996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.399359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.399388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.399661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.399693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.400142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.400171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.400525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.400555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.400926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.400956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.401316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.401345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.401707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.401738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.401997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.402030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 
00:29:01.324 [2024-12-05 14:19:07.402391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.402421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.402774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.402805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.403164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.403193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.403558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.403589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.403832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.403865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.404218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.404249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.404604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.404635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.405083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.405113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.405452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.405493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.324 [2024-12-05 14:19:07.405863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.405892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 
00:29:01.324 [2024-12-05 14:19:07.406253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.324 [2024-12-05 14:19:07.406281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.324 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.406631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.406669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.407048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.407078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.407440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.407498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.407927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.407956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.408321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.408350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.408688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.408721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.409059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.409087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.409450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.409492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.409837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.409865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 
00:29:01.325 [2024-12-05 14:19:07.410222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.410250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.410588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.410618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.410997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.411026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.411394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.411424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.411783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.411813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.412179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.412209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.412573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.412604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.412962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.412992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.413359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.413388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.413797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.413829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 
00:29:01.325 [2024-12-05 14:19:07.414174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.414204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.414576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.414606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.414959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.414988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.415355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.415384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.415697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.415728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.416087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.416116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.416482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.416513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.416860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.416889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.417264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.417294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.417681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.417712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 
00:29:01.325 [2024-12-05 14:19:07.418070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.418098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.418468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.418499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.418756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.418788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.419161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.419190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.419570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.419601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.419950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.419979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.420318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.420348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.420718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.325 [2024-12-05 14:19:07.420752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.325 qpair failed and we were unable to recover it. 00:29:01.325 [2024-12-05 14:19:07.421093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.421122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.421554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.421585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 
00:29:01.326 [2024-12-05 14:19:07.421814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.421847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.422207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.422243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.422603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.422634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.422994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.423025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.423394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.423423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.423799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.423830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.424202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.424232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.424586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.424618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.424962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.424991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.425358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.425389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 
00:29:01.326 [2024-12-05 14:19:07.425732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.425763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.426130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.426159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.426522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.426552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.426985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.427014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.427364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.427394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.427817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.427848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.428109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.428142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.428512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.428544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.428901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.428930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.429286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.429315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 
00:29:01.326 [2024-12-05 14:19:07.429684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.429715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.430082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.430113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.430481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.430513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.430860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.430890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.431241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.431271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.431647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.431677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.432047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.432076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.432339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.432370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.432821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.432853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.433095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.433124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 
00:29:01.326 [2024-12-05 14:19:07.433494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.326 [2024-12-05 14:19:07.433526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.326 qpair failed and we were unable to recover it. 00:29:01.326 [2024-12-05 14:19:07.433902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.433931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.434353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.434382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.434749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.434780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.435030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.435062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.435417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.435448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.435817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.435847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.436211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.436241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.436608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.436640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.436999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.437028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 
00:29:01.327 [2024-12-05 14:19:07.437269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.437302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.437680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.437711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.438082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.438112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.438475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.438506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.438750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.438779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.439130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.439160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.439524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.439555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.439972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.440001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.440374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.440403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.440818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.440849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 
00:29:01.327 [2024-12-05 14:19:07.441008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.441040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.441429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.441481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.441884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.441914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.442260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.442289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.442660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.442690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.443066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.443095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.443482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.443514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.443858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.443887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.444248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.444276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.444705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.444735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 
00:29:01.327 [2024-12-05 14:19:07.445082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.445112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.445546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.445578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.445934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.445964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.446228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.446259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.446640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.446672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.447021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.447051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.447418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.447447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.447810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.447840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.327 [2024-12-05 14:19:07.448200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.327 [2024-12-05 14:19:07.448236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.327 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.448598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.448630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 
00:29:01.328 [2024-12-05 14:19:07.449001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.449032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.449384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.449413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.449810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.449839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.450205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.450234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.450597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.450629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.451003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.451033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.451402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.451432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.451787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.451818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.452168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.452198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.452598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.452630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 
00:29:01.328 [2024-12-05 14:19:07.453058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.453088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.453430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.453471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.453912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.453941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.454298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.454327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.454591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.454625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.454987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.455017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.455385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.455414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.455783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.455815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.456182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.456210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.456573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.456604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 
00:29:01.328 [2024-12-05 14:19:07.456957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.456986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.457350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.457381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.457737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.457768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.457997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.458028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.458300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.458330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.458704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.458736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.458982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.459011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.459372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.459403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.459823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.459854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.460233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.460263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 
00:29:01.328 [2024-12-05 14:19:07.460623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.460654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.460990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.461020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.461390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.461419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.461781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.461814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.462154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.462183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.462544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.462575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.328 [2024-12-05 14:19:07.462948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.328 [2024-12-05 14:19:07.462978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.328 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.463319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.463348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.463683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.463721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.464077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.464107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 
00:29:01.329 [2024-12-05 14:19:07.464470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.464501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.464853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.464881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.465244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.465274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.465532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.465562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.465831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.465859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.466226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.466256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.466597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.466628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.466895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.466924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.467307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.467336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.467686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.467716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 
00:29:01.329 [2024-12-05 14:19:07.468082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.468113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.468467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.468500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.468849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.468879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.469251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.469280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.469628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.469658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.470021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.470051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.470484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.470516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.470861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.470890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.471121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.471153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.471499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.471530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 
00:29:01.329 [2024-12-05 14:19:07.471904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.471933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.472290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.472319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.472679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.472710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.473071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.473101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.473473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.473504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.473939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.473968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.474326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.474356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.474717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.474749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.475104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.475133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.475494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.475525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 
00:29:01.329 [2024-12-05 14:19:07.475921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.475951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.476308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.476338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.476779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.476809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.477171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.477200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.329 qpair failed and we were unable to recover it. 00:29:01.329 [2024-12-05 14:19:07.477578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.329 [2024-12-05 14:19:07.477609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 00:29:01.330 [2024-12-05 14:19:07.477968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.477999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 00:29:01.330 [2024-12-05 14:19:07.478296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.478325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 00:29:01.330 [2024-12-05 14:19:07.478684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.478716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 00:29:01.330 [2024-12-05 14:19:07.479082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.479118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 00:29:01.330 [2024-12-05 14:19:07.479493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.479524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 
00:29:01.330 [2024-12-05 14:19:07.479882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.479912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 00:29:01.330 [2024-12-05 14:19:07.480262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.480292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 00:29:01.330 [2024-12-05 14:19:07.480660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.480691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 00:29:01.330 [2024-12-05 14:19:07.481063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.481093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 00:29:01.330 [2024-12-05 14:19:07.481443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.481485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 00:29:01.330 [2024-12-05 14:19:07.481856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.481885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 00:29:01.330 [2024-12-05 14:19:07.482200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.482230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 00:29:01.330 [2024-12-05 14:19:07.482587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.482619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 00:29:01.330 [2024-12-05 14:19:07.482989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.483018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 00:29:01.330 [2024-12-05 14:19:07.483399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.483428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 
00:29:01.330 [2024-12-05 14:19:07.483806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.483836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 00:29:01.330 [2024-12-05 14:19:07.484206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.484234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 00:29:01.330 [2024-12-05 14:19:07.484567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.484597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 00:29:01.330 [2024-12-05 14:19:07.485019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.485049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 00:29:01.330 [2024-12-05 14:19:07.485409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.485438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 00:29:01.330 [2024-12-05 14:19:07.485805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.485836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 00:29:01.330 [2024-12-05 14:19:07.486190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.486219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 00:29:01.330 [2024-12-05 14:19:07.486579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.486611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 00:29:01.330 [2024-12-05 14:19:07.486962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.486990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 00:29:01.330 [2024-12-05 14:19:07.487352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.330 [2024-12-05 14:19:07.487382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.330 qpair failed and we were unable to recover it. 
00:29:01.330 [2024-12-05 14:19:07.487800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.330 [2024-12-05 14:19:07.487831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:01.330 qpair failed and we were unable to recover it.
00:29:01.331 [the same connect() failed / sock connection error / qpair failed triple repeats for each reconnect attempt from 14:19:07.488180 through 14:19:07.498559]
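errno 111 on Linux is ECONNREFUSED: while the target side of the test is down, nothing is listening on 10.0.0.2:4420, so every reconnect attempt the initiator makes is actively refused. The sketch below is a minimal standalone reproduction of the condition posix_sock_create is reporting, using plain POSIX sockets rather than SPDK code; the address and port are taken from the log above.

/* Minimal standalone sketch (not SPDK code): with the target process killed
 * but the address still reachable, connect() fails with errno == ECONNREFUSED,
 * which is 111 on Linux, the exact value in the log lines above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),            /* NVMe/TCP port, as in the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}

Port 4420 is the IANA-assigned NVMe/TCP port, which is why it appears in every failed attempt throughout this log.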
00:29:01.331 [reconnect retries continue: the error triple repeats from 14:19:07.498976 through 14:19:07.500951]
00:29:01.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2915502 Killed "${NVMF_APP[@]}" "$@"
00:29:01.331 14:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:01.331 14:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:01.331 14:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:01.331 14:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:01.331 14:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:01.331 [reconnect retries continue, interleaved with the trace above, from 14:19:07.501323 through 14:19:07.505142]
00:29:01.332 [the same error triple repeats from 14:19:07.505524 through 14:19:07.508820]
00:29:01.332 14:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:01.332 14:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2916477
00:29:01.332 14:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2916477
00:29:01.332 14:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2916477 ']'
00:29:01.332 14:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:01.332 14:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:01.332 14:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:01.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:01.332 14:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:01.332 14:19:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:01.332 [reconnect retries continue, interleaved with the trace above, from 14:19:07.509104 through 14:19:07.514369]
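Here waitforlisten (a shell function in common/autotest_common.sh, per the xtrace above) polls until the freshly restarted nvmf_tgt, PID 2916477, is accepting connections on its RPC socket /var/tmp/spdk.sock, giving up after max_retries=100 attempts. Below is a rough standalone C sketch of that poll-until-listening pattern; it is illustrative only (the real helper is shell, and the wait_for_listen name is hypothetical).

/* Illustrative sketch only, not the SPDK autotest helper: poll a UNIX-domain
 * socket until the restarted process accepts connections, the way
 * waitforlisten does for /var/tmp/spdk.sock, up to max_retries attempts. */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)  /* hypothetical name */
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);

    for (int attempt = 0; attempt < max_retries; attempt++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;   /* the listener is up */
        }
        close(fd);      /* ENOENT or ECONNREFUSED: not up yet */
        sleep(1);
    }
    return -1;          /* timed out waiting for the listener */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
        puts("listener is up");
    else
        puts("gave up waiting for the listener");
    return 0;
}

Until that restart completes (and the NVMe/TCP listener on 10.0.0.2:4420 is re-created), the initiator keeps logging the refused-connection triple condensed below.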
00:29:01.332 [the connect() failed, errno = 111 / sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it triple continues to repeat for every reconnect attempt from 14:19:07.514810 through 14:19:07.564849]
00:29:01.336 [2024-12-05 14:19:07.565241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.336 [2024-12-05 14:19:07.565271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.336 qpair failed and we were unable to recover it. 00:29:01.336 [2024-12-05 14:19:07.565527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.336 [2024-12-05 14:19:07.565558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.336 qpair failed and we were unable to recover it. 00:29:01.336 [2024-12-05 14:19:07.565905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.336 [2024-12-05 14:19:07.565934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.336 qpair failed and we were unable to recover it. 00:29:01.336 [2024-12-05 14:19:07.566298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.336 [2024-12-05 14:19:07.566328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.336 qpair failed and we were unable to recover it. 00:29:01.336 [2024-12-05 14:19:07.566685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.336 [2024-12-05 14:19:07.566716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.336 qpair failed and we were unable to recover it. 00:29:01.336 [2024-12-05 14:19:07.567081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.336 [2024-12-05 14:19:07.567110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.336 qpair failed and we were unable to recover it. 00:29:01.336 [2024-12-05 14:19:07.567469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.336 [2024-12-05 14:19:07.567501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.336 qpair failed and we were unable to recover it. 00:29:01.336 [2024-12-05 14:19:07.567882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.336 [2024-12-05 14:19:07.567911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.336 qpair failed and we were unable to recover it. 00:29:01.336 [2024-12-05 14:19:07.568285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.336 [2024-12-05 14:19:07.568314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.336 qpair failed and we were unable to recover it. 00:29:01.336 [2024-12-05 14:19:07.568595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.336 [2024-12-05 14:19:07.568626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.336 qpair failed and we were unable to recover it. 
00:29:01.336 [2024-12-05 14:19:07.568995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.336 [2024-12-05 14:19:07.569024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.336 qpair failed and we were unable to recover it. 00:29:01.336 [2024-12-05 14:19:07.569390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.336 [2024-12-05 14:19:07.569420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.336 qpair failed and we were unable to recover it. 00:29:01.336 [2024-12-05 14:19:07.569599] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:29:01.336 [2024-12-05 14:19:07.569657] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:01.336 [2024-12-05 14:19:07.569812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.336 [2024-12-05 14:19:07.569842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.336 qpair failed and we were unable to recover it. 00:29:01.336 [2024-12-05 14:19:07.570199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.336 [2024-12-05 14:19:07.570227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.336 qpair failed and we were unable to recover it. 00:29:01.336 [2024-12-05 14:19:07.570589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.336 [2024-12-05 14:19:07.570620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.336 qpair failed and we were unable to recover it. 00:29:01.336 [2024-12-05 14:19:07.570997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.336 [2024-12-05 14:19:07.571026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.336 qpair failed and we were unable to recover it. 00:29:01.336 [2024-12-05 14:19:07.571394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.336 [2024-12-05 14:19:07.571424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.336 qpair failed and we were unable to recover it. 00:29:01.336 [2024-12-05 14:19:07.571817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.336 [2024-12-05 14:19:07.571847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.336 qpair failed and we were unable to recover it. 00:29:01.336 [2024-12-05 14:19:07.572218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.336 [2024-12-05 14:19:07.572248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.336 qpair failed and we were unable to recover it. 
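[Editorial note, not part of the test output: the "DPDK EAL parameters" line above records how the nvmf target process was launched. "-c 0xF0" is the EAL coremask (0xF0 = binary 11110000, i.e. CPU cores 4-7), "--file-prefix=spdk0" namespaces the hugepage files so several DPDK processes can coexist, and "--proc-type=auto" lets EAL decide between primary and secondary process roles. The sketch below is illustrative only and assumes a stock DPDK install; it shows how such parameters are typically fed to rte_eal_init(), which consumes them argv-style before an application parses its own options.]

/* Minimal, hypothetical sketch -- not SPDK's actual startup code. */
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",                /* program name, as it appears in the log */
        "-c", "0xF0",          /* coremask: run on cores 4-7 */
        "--file-prefix=spdk0", /* namespace hugepage files per process */
        "--proc-type=auto",    /* auto-detect primary/secondary role */
        NULL,
    };
    int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0])) - 1;

    /* Parses the EAL options and initializes cores, hugepages, etc. */
    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "rte_eal_init failed\n");
        return 1;
    }
    rte_eal_cleanup();
    return 0;
}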
[... the same failure triple resumes at 2024-12-05 14:19:07.569812 and repeats more than a hundred times with successive timestamps; only the final instance is kept below ...]
00:29:01.615 [2024-12-05 14:19:07.628729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.615 [2024-12-05 14:19:07.628762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:01.615 qpair failed and we were unable to recover it.
00:29:01.615 [2024-12-05 14:19:07.629101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.615 [2024-12-05 14:19:07.629130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-12-05 14:19:07.629495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.615 [2024-12-05 14:19:07.629526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-12-05 14:19:07.629898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.615 [2024-12-05 14:19:07.629928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-12-05 14:19:07.630294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.615 [2024-12-05 14:19:07.630326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-12-05 14:19:07.632240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.615 [2024-12-05 14:19:07.632303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-12-05 14:19:07.632614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.615 [2024-12-05 14:19:07.632652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-12-05 14:19:07.633016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.615 [2024-12-05 14:19:07.633047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-12-05 14:19:07.633391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.615 [2024-12-05 14:19:07.633420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-12-05 14:19:07.633845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.615 [2024-12-05 14:19:07.633878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-12-05 14:19:07.634242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.615 [2024-12-05 14:19:07.634274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.615 qpair failed and we were unable to recover it. 
00:29:01.615 [2024-12-05 14:19:07.634622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.615 [2024-12-05 14:19:07.634654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-12-05 14:19:07.635020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.615 [2024-12-05 14:19:07.635049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-12-05 14:19:07.635420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.615 [2024-12-05 14:19:07.635471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-12-05 14:19:07.635886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.615 [2024-12-05 14:19:07.635917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-12-05 14:19:07.636269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.615 [2024-12-05 14:19:07.636300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-12-05 14:19:07.636654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.615 [2024-12-05 14:19:07.636687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-12-05 14:19:07.637045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.615 [2024-12-05 14:19:07.637075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-12-05 14:19:07.637439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.615 [2024-12-05 14:19:07.637508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.615 qpair failed and we were unable to recover it. 00:29:01.615 [2024-12-05 14:19:07.637690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.637738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.638138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.638188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 
00:29:01.616 [2024-12-05 14:19:07.638562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.638620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.639031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.639089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.639494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.639545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.639939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.639987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.640391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.640423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.640815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.640849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.641093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.641127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.641490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.641522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.641853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.641883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.642242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.642272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 
00:29:01.616 [2024-12-05 14:19:07.642542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.642575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.642942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.642972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.643300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.643330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.643714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.643745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.644102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.644132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.644498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.644530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.644868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.644900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.645231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.645261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.645645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.645676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.646053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.646084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 
00:29:01.616 [2024-12-05 14:19:07.646338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.646370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.646719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.646752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.647110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.647140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.647382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.647411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.647739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.647770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.648017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.648049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.648410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.648440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.648828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.648859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.649240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.649270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.649631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.649662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 
00:29:01.616 [2024-12-05 14:19:07.650029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.650060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.650402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.650432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.650843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.650884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.651226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.651257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.651529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.651566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.651861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.616 [2024-12-05 14:19:07.651891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.616 qpair failed and we were unable to recover it. 00:29:01.616 [2024-12-05 14:19:07.652121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.652152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.652510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.652542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.652924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.652954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.653322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.653352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 
00:29:01.617 [2024-12-05 14:19:07.653772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.653803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.654167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.654197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.654535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.654566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.654957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.654987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.655351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.655382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.655630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.655662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.655926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.655959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.656339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.656369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.656621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.656654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.657027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.657057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 
00:29:01.617 [2024-12-05 14:19:07.657298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.657328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.657611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.657644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.658023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.658054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.658429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.658481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.658741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.658771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.659111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.659141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.659362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.659392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.659736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.659770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.660142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.660173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.660385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.660414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 
00:29:01.617 [2024-12-05 14:19:07.660850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.660882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.661135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.661166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.661536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.661567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.661942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.661971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.662342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.662372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.662771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.662803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.663164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.663194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.663534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.663565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.663927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.617 [2024-12-05 14:19:07.663956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.617 qpair failed and we were unable to recover it. 00:29:01.617 [2024-12-05 14:19:07.664183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.664216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 
00:29:01.618 [2024-12-05 14:19:07.664551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.664582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.664956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.664985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.665359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.665396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.665769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.665802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.666184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.666215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.666585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.666617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.666985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.667014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.667376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.667406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.667799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.667831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.668192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.668222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 
00:29:01.618 [2024-12-05 14:19:07.668575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.668607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.669018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.669048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.669297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.669332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.669575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.669609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.669858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.669889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.670237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.670269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 [2024-12-05 14:19:07.670290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.670533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.670565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.670949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.670981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.671351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.671380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 
00:29:01.618 [2024-12-05 14:19:07.671757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.671789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.672127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.672157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.672522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.672554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.672899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.672930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.673294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.673325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.673689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.673721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.674084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.674115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.674482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.674515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.674671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.674705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.675051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.675082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 
00:29:01.618 [2024-12-05 14:19:07.675474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.675507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.675846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.675876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.676239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.676269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.676529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.676564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.676942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.676974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.677341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.677370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.677783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.677814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.618 qpair failed and we were unable to recover it. 00:29:01.618 [2024-12-05 14:19:07.678065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.618 [2024-12-05 14:19:07.678095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.619 qpair failed and we were unable to recover it. 00:29:01.619 [2024-12-05 14:19:07.678433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.619 [2024-12-05 14:19:07.678472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.619 qpair failed and we were unable to recover it. 00:29:01.619 [2024-12-05 14:19:07.678819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.619 [2024-12-05 14:19:07.678848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.619 qpair failed and we were unable to recover it. 
00:29:01.619 [2024-12-05 14:19:07.679218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.619 [2024-12-05 14:19:07.679249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:01.619 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 / sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously from 14:19:07.679601 through 14:19:07.719406 while the host keeps retrying the connection ...]
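On Linux, errno = 111 is ECONNREFUSED: the SYN to 10.0.0.2:4420 is answered with RST because nothing is listening on that port yet (4420 is the IANA-assigned NVMe/TCP port), which is consistent with the target application still starting up while the initiator retries. A minimal standalone sketch, separate from the test itself, that reproduces the same errno against a port with no listener; the loopback address here is illustrative, not the testbed address:

```c
/* econnrefused_demo.c - reproduce the errno = 111 seen above.
 * Connecting to a TCP port with no listener makes the peer answer
 * with RST, and connect() fails with ECONNREFUSED (111 on Linux).
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* NVMe/TCP port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* illustrative addr */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port this prints errno = 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```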
[... retry sequence continues from 14:19:07.719786 through 14:19:07.722137 ...]
00:29:01.622 [2024-12-05 14:19:07.722280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:01.622 [2024-12-05 14:19:07.722322] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:01.622 [2024-12-05 14:19:07.722332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:01.622 [2024-12-05 14:19:07.722339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:01.622 [2024-12-05 14:19:07.722345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:01.622 [2024-12-05 14:19:07.722486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.622 [2024-12-05 14:19:07.722517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:01.622 qpair failed and we were unable to recover it.
[... retry sequence continues from 14:19:07.722762 through 14:19:07.724524 ...]
00:29:01.622 [2024-12-05 14:19:07.724535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:29:01.622 [2024-12-05 14:19:07.724673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:29:01.622 [2024-12-05 14:19:07.724775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:29:01.622 [2024-12-05 14:19:07.724775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
[... retry sequence continues from 14:19:07.724926 through 14:19:07.725785 ...]
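The four reactor_run notices mark the point where the application's event loops come up, one polling thread pinned to each of cores 4-7. As a rough, generic illustration of that one-thread-per-core pattern only (this is not SPDK's actual reactor implementation, and the names below are made up for the sketch):

```c
/* reactor_sketch.c - generic one-thread-per-core poll loop, loosely
 * analogous to the "Reactor started on core N" notices above.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool g_running = true;

static void *reactor_run(void *arg)
{
    long core = (long)arg;

    /* Pin this thread to its core, as a polled-mode runtime would
     * (affinity failures on small machines are ignored in this demo). */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    printf("Reactor started on core %ld\n", core);
    while (atomic_load(&g_running)) {
        /* Poll registered event sources here (busy-poll, no blocking). */
    }
    return NULL;
}

int main(void)
{
    long cores[] = {4, 5, 6, 7};   /* the cores seen in the log above */
    pthread_t t[4];

    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, reactor_run, (void *)cores[i]);

    atomic_store(&g_running, false);   /* stop right away for the demo */
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```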
[... the connect() failed, errno = 111 retry sequence for tqpair=0x7f2aa0000b90 (addr=10.0.0.2, port=4420) continues unchanged from 14:19:07.725968 through 14:19:07.754392, every attempt ending in "qpair failed and we were unable to recover it." ...]
00:29:01.624 [2024-12-05 14:19:07.754756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.624 [2024-12-05 14:19:07.754789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.624 qpair failed and we were unable to recover it. 00:29:01.624 [2024-12-05 14:19:07.755156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.624 [2024-12-05 14:19:07.755186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.624 qpair failed and we were unable to recover it. 00:29:01.624 [2024-12-05 14:19:07.755519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.624 [2024-12-05 14:19:07.755552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.624 qpair failed and we were unable to recover it. 00:29:01.624 [2024-12-05 14:19:07.755919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.624 [2024-12-05 14:19:07.755949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.624 qpair failed and we were unable to recover it. 00:29:01.624 [2024-12-05 14:19:07.756327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.624 [2024-12-05 14:19:07.756357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.624 qpair failed and we were unable to recover it. 00:29:01.624 [2024-12-05 14:19:07.756735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.624 [2024-12-05 14:19:07.756767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.624 qpair failed and we were unable to recover it. 00:29:01.624 [2024-12-05 14:19:07.757119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.624 [2024-12-05 14:19:07.757159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.624 qpair failed and we were unable to recover it. 00:29:01.624 [2024-12-05 14:19:07.757377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.624 [2024-12-05 14:19:07.757409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.624 qpair failed and we were unable to recover it. 00:29:01.624 [2024-12-05 14:19:07.757783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.624 [2024-12-05 14:19:07.757814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.624 qpair failed and we were unable to recover it. 00:29:01.624 [2024-12-05 14:19:07.758078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.624 [2024-12-05 14:19:07.758110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.624 qpair failed and we were unable to recover it. 
00:29:01.624 [2024-12-05 14:19:07.758345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.624 [2024-12-05 14:19:07.758375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.624 qpair failed and we were unable to recover it. 00:29:01.624 [2024-12-05 14:19:07.758628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.758659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.758997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.759028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.759385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.759415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.759829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.759860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.760218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.760248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.760495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.760526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.760907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.760937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.761199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.761231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.761447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.761489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 
00:29:01.625 [2024-12-05 14:19:07.761838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.761868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.762093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.762122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.762489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.762520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.762921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.762950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.763201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.763230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.763574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.763605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.763985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.764013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.764393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.764422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.764806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.764838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.765052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.765082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 
00:29:01.625 [2024-12-05 14:19:07.765435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.765498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.765844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.765874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.766231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.766260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.766502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.766533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.766904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.766933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.767201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.767230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.767580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.767611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.767991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.768022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.768367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.768396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.768762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.768793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 
00:29:01.625 [2024-12-05 14:19:07.769140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.769169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.769520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.769550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.769904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.769933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.770305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.770333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.770718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.770749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.771112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.771141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.771515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.771552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.771931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.771960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.772333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.625 [2024-12-05 14:19:07.772362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.625 qpair failed and we were unable to recover it. 00:29:01.625 [2024-12-05 14:19:07.772789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.772819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 
00:29:01.626 [2024-12-05 14:19:07.773174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.773204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.773564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.773595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.774001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.774031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.774267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.774299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.774539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.774571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.774939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.774969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.775343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.775373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.775716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.775747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.776113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.776142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.776520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.776550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 
00:29:01.626 [2024-12-05 14:19:07.776818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.776852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.777228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.777257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.777636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.777667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.778039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.778069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.778290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.778319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.778670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.778702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.778917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.778949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.779217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.779250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.779481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.779513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.779973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.780003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 
00:29:01.626 [2024-12-05 14:19:07.780345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.780374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.780589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.780619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.780993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.781023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.781257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.781287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.781678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.781708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.781959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.781992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.782235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.782266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.782486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.782517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.782748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.782777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.783119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.783149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 
00:29:01.626 [2024-12-05 14:19:07.783511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.783542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.783799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.783828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.626 [2024-12-05 14:19:07.784205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.626 [2024-12-05 14:19:07.784235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.626 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.784503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.784534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.784854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.784883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.785131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.785161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.785516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.785552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.785927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.785956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.786324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.786354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.786701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.786732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 
00:29:01.627 [2024-12-05 14:19:07.787151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.787180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.787453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.787496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.787902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.787931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.788197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.788227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.788577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.788607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.788973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.789002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.789371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.789399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.789768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.789798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.789962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.789994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.790353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.790382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 
00:29:01.627 [2024-12-05 14:19:07.790615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.790649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.791047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.791076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.791194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.791223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.791470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.791503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.791844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.791874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.792096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.792125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.792488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.792519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.792855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.792884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.793255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.793284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.793631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.793662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 
00:29:01.627 [2024-12-05 14:19:07.794023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.794051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.794427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.794466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.794829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.794859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.795223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.795253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.795495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.795526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.795886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.795914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.796268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.796299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.796681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.796711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.796950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.796979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.627 [2024-12-05 14:19:07.797353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.797382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 
00:29:01.627 [2024-12-05 14:19:07.797746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.627 [2024-12-05 14:19:07.797776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.627 qpair failed and we were unable to recover it. 00:29:01.628 [2024-12-05 14:19:07.798154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.628 [2024-12-05 14:19:07.798183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.628 qpair failed and we were unable to recover it. 00:29:01.628 [2024-12-05 14:19:07.798571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.628 [2024-12-05 14:19:07.798601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.628 qpair failed and we were unable to recover it. 00:29:01.628 [2024-12-05 14:19:07.798950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.628 [2024-12-05 14:19:07.798978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.628 qpair failed and we were unable to recover it. 00:29:01.628 [2024-12-05 14:19:07.799338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.628 [2024-12-05 14:19:07.799367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.628 qpair failed and we were unable to recover it. 00:29:01.628 [2024-12-05 14:19:07.799664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.628 [2024-12-05 14:19:07.799695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.628 qpair failed and we were unable to recover it. 00:29:01.628 [2024-12-05 14:19:07.799940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.628 [2024-12-05 14:19:07.799975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.628 qpair failed and we were unable to recover it. 00:29:01.628 [2024-12-05 14:19:07.800325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.628 [2024-12-05 14:19:07.800353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.628 qpair failed and we were unable to recover it. 00:29:01.628 [2024-12-05 14:19:07.800714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.628 [2024-12-05 14:19:07.800745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.628 qpair failed and we were unable to recover it. 00:29:01.628 [2024-12-05 14:19:07.801109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.628 [2024-12-05 14:19:07.801138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:01.628 qpair failed and we were unable to recover it. 
00:29:01.628 [2024-12-05 14:19:07.801363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.628 [2024-12-05 14:19:07.801393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:01.628 qpair failed and we were unable to recover it.
[... 145 further identical connect() / nvme_tcp_qpair_connect_sock failure pairs for tqpair=0x7f2aa0000b90, timestamps 14:19:07.801757 through 14:19:07.853283, each ending "qpair failed and we were unable to recover it." ...]
00:29:01.632 [2024-12-05 14:19:07.853385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.632 [2024-12-05 14:19:07.853415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:01.632 qpair failed and we were unable to recover it.
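errno 111 on Linux is ECONNREFUSED: each connect() issued by the host's posix sock layer reaches 10.0.0.2, but nothing is accepting on TCP port 4420 (the IANA-assigned NVMe/TCP port), so the remote end actively refuses the connection and nvme_tcp_qpair_connect_sock abandons the qpair. A refusal typically means the nvmf target process is down or not yet listening; a firewall silently dropping SYNs would instead surface as errno 110 (ETIMEDOUT). A minimal standalone probe of the same condition, assuming only the address and port taken from the log (this is not SPDK code):

/* probe_nvme_tcp.c - try the same TCP connect the log shows and print the
 * errno. Address and port mirror the log entries above; not SPDK code. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),          /* NVMe/TCP well-known port */
    };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* With no listener on 10.0.0.2:4420 this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connected\n");
    }
    close(fd);
    return 0;
}

Compiled with cc probe_nvme_tcp.c, this prints exactly the errno shown above for as long as the target stays unreachable.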
00:29:01.632 Read completed with error (sct=0, sc=8)
00:29:01.632 starting I/O failed
[... 31 further completions elided: 32 outstanding I/Os in total (19 reads, 13 writes) completed with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:29:01.632 [2024-12-05 14:19:07.854239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
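These completions are the host failing back every I/O still queued on the dead qpair: sct=0 selects the NVMe Generic Command Status type, and within that type sc=0x8 is, per the NVMe base specification, Command Aborted due to SQ Deletion, i.e. the commands were aborted because their submission queue went away, not because the media failed. The closing line is the qpair-level verdict: spdk_nvme_qpair_process_completions reports -6 (-ENXIO, "No such device or address") for qpair id 1 on controller nqn.2016-06.io.spdk:cnode1 once the TCP transport underneath it is gone. A hand-rolled decoder for the (sct, sc) pair, tabulating only the generic codes relevant here (this is not SPDK's own status printer):

/* decode_status.c - map the (sct, sc) pair from the log onto NVMe spec
 * names. Only a few Generic Command Status codes are tabulated. */
#include <stdio.h>

static const char *sct_name(unsigned sct)
{
    switch (sct) {
    case 0x0: return "Generic Command Status";
    case 0x1: return "Command Specific Status";
    case 0x2: return "Media and Data Integrity Errors";
    case 0x3: return "Path Related Status";
    default:  return "Vendor Specific / Reserved";
    }
}

static const char *generic_sc_name(unsigned sc)
{
    switch (sc) {
    case 0x0: return "Successful Completion";
    case 0x4: return "Data Transfer Error";
    case 0x7: return "Command Abort Requested";
    case 0x8: return "Command Aborted due to SQ Deletion";
    default:  return "(other)";
    }
}

int main(void)
{
    unsigned sct = 0, sc = 8;   /* the pair shown in the log above */
    printf("sct=%u (%s), sc=%u (%s)\n",
           sct, sct_name(sct), sc, generic_sc_name(sc));
    return 0;
}

After reporting the transport error, the host releases the dead qpair and resumes connect attempts on a freshly allocated one, which is why the tqpair pointer changes from 0x7f2aa0000b90 to 0x7f2aac000b90 in the entries that follow.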
00:29:01.632 [2024-12-05 14:19:07.854783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.632 [2024-12-05 14:19:07.854900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420
00:29:01.632 qpair failed and we were unable to recover it.
[... 49 further identical connect() / nvme_tcp_qpair_connect_sock failure pairs for the re-allocated tqpair=0x7f2aac000b90, timestamps 14:19:07.855309 through 14:19:07.873761, each ending "qpair failed and we were unable to recover it." ...]
00:29:01.633 [2024-12-05 14:19:07.874101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.633 [2024-12-05 14:19:07.874132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420
00:29:01.633 qpair failed and we were unable to recover it.
00:29:01.633 [2024-12-05 14:19:07.874483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.633 [2024-12-05 14:19:07.874514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.633 qpair failed and we were unable to recover it. 00:29:01.633 [2024-12-05 14:19:07.874908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.874937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.875304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.875333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.875537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.875568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.875891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.875920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.876291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.876321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.876589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.876619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.876983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.877012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.877374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.877403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.877615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.877644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 
00:29:01.634 [2024-12-05 14:19:07.878001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.878030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.878384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.878413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.878623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.878653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.879028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.879057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.879429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.879482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.879830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.879859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.880227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.880256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.880616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.880646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.881023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.881059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.881394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.881424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 
00:29:01.634 [2024-12-05 14:19:07.881657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.881688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.882058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.882087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.882452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.882495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.882857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.882886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.883129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.883161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.883516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.883546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.883772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.883802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.884142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.884171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.884488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.884518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.884860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.884890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 
00:29:01.634 [2024-12-05 14:19:07.885232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.885262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.885614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.885644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.886011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.886041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.886416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.886446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.886887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.886916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.887194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.887223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.887318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.887347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.887658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.887689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.888048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.634 [2024-12-05 14:19:07.888078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.634 qpair failed and we were unable to recover it. 00:29:01.634 [2024-12-05 14:19:07.888336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.888365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 
00:29:01.635 [2024-12-05 14:19:07.888726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.888757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 00:29:01.635 [2024-12-05 14:19:07.889023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.889052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 00:29:01.635 [2024-12-05 14:19:07.889390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.889419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 00:29:01.635 [2024-12-05 14:19:07.889787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.889817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 00:29:01.635 [2024-12-05 14:19:07.890217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.890245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 00:29:01.635 [2024-12-05 14:19:07.890591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.890622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 00:29:01.635 [2024-12-05 14:19:07.890855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.890884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 00:29:01.635 [2024-12-05 14:19:07.891088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.891116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 00:29:01.635 [2024-12-05 14:19:07.891496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.891526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 00:29:01.635 [2024-12-05 14:19:07.891929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.891959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 
00:29:01.635 [2024-12-05 14:19:07.892107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.892136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 00:29:01.635 [2024-12-05 14:19:07.892483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.892514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 00:29:01.635 [2024-12-05 14:19:07.892840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.892868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 00:29:01.635 [2024-12-05 14:19:07.893237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.893265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 00:29:01.635 [2024-12-05 14:19:07.893629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.893659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 00:29:01.635 [2024-12-05 14:19:07.894026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.894055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 00:29:01.635 [2024-12-05 14:19:07.894416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.894444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 00:29:01.635 [2024-12-05 14:19:07.894795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.894824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 00:29:01.635 [2024-12-05 14:19:07.895202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.895238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 00:29:01.635 [2024-12-05 14:19:07.895577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.895607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 
00:29:01.635 [2024-12-05 14:19:07.895827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.895855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 00:29:01.635 [2024-12-05 14:19:07.896090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.896119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 00:29:01.635 [2024-12-05 14:19:07.896498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.896528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 00:29:01.635 [2024-12-05 14:19:07.896796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.896826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 00:29:01.635 [2024-12-05 14:19:07.897188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.635 [2024-12-05 14:19:07.897216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.635 qpair failed and we were unable to recover it. 00:29:01.910 [2024-12-05 14:19:07.897585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.910 [2024-12-05 14:19:07.897617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.910 qpair failed and we were unable to recover it. 00:29:01.910 [2024-12-05 14:19:07.897824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.910 [2024-12-05 14:19:07.897855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.910 qpair failed and we were unable to recover it. 00:29:01.910 [2024-12-05 14:19:07.898212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.910 [2024-12-05 14:19:07.898240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.910 qpair failed and we were unable to recover it. 00:29:01.910 [2024-12-05 14:19:07.898478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.910 [2024-12-05 14:19:07.898508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.910 qpair failed and we were unable to recover it. 00:29:01.910 [2024-12-05 14:19:07.898878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.910 [2024-12-05 14:19:07.898908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.910 qpair failed and we were unable to recover it. 
00:29:01.910 [2024-12-05 14:19:07.899286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.910 [2024-12-05 14:19:07.899315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.910 qpair failed and we were unable to recover it. 00:29:01.910 [2024-12-05 14:19:07.899688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.899718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.900046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.900076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.900434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.900473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.900829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.900858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.901158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.901189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.901536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.901567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.901908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.901937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.902301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.902330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.902601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.902631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 
00:29:01.911 [2024-12-05 14:19:07.902997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.903027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.903260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.903289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.903663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.903694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.903938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.903970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.904332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.904362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.904638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.904669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.905022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.905051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.905252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.905281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.905629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.905660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.906023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.906052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 
00:29:01.911 [2024-12-05 14:19:07.906407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.906436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.906536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.906567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.906869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.906897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.907255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.907284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.907635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.907665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.908032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.908061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.908429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.908467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.908814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.908843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.911 [2024-12-05 14:19:07.909201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.911 [2024-12-05 14:19:07.909236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.911 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.909475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.909505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 
00:29:01.912 [2024-12-05 14:19:07.909858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.909888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.910131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.910161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.910536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.910566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.910762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.910791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.911152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.911181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.911535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.911565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.911767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.911797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.912166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.912194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.912565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.912595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.912964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.912993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 
00:29:01.912 [2024-12-05 14:19:07.913369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.913397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.913762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.913793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.914178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.914207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.914561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.914591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.914949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.914977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.915297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.915327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.915591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.915625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.915958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.915988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.916205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.916234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.916584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.916615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 
00:29:01.912 [2024-12-05 14:19:07.916835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.916864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.917081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.917108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.917253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.917286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.917619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.917649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.918012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.918041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.918401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.918431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.918831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.918861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.912 qpair failed and we were unable to recover it. 00:29:01.912 [2024-12-05 14:19:07.919234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.912 [2024-12-05 14:19:07.919263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.913 qpair failed and we were unable to recover it. 00:29:01.913 [2024-12-05 14:19:07.919627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.913 [2024-12-05 14:19:07.919658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.913 qpair failed and we were unable to recover it. 00:29:01.913 [2024-12-05 14:19:07.920029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.913 [2024-12-05 14:19:07.920057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.913 qpair failed and we were unable to recover it. 
00:29:01.913 [2024-12-05 14:19:07.920436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.913 [2024-12-05 14:19:07.920475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420
00:29:01.913 qpair failed and we were unable to recover it.
[... the same three-message failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously with only the timestamps advancing, from 14:19:07.920 through 14:19:07.992 ...]
00:29:01.920 [2024-12-05 14:19:07.992690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.920 [2024-12-05 14:19:07.992719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420
00:29:01.920 qpair failed and we were unable to recover it.
00:29:01.920 [2024-12-05 14:19:07.993068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.920 [2024-12-05 14:19:07.993097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.920 qpair failed and we were unable to recover it. 00:29:01.920 [2024-12-05 14:19:07.993452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.920 [2024-12-05 14:19:07.993491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.920 qpair failed and we were unable to recover it. 00:29:01.920 [2024-12-05 14:19:07.993802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.920 [2024-12-05 14:19:07.993831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.920 qpair failed and we were unable to recover it. 00:29:01.920 [2024-12-05 14:19:07.994186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.920 [2024-12-05 14:19:07.994215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.920 qpair failed and we were unable to recover it. 00:29:01.920 [2024-12-05 14:19:07.994474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.920 [2024-12-05 14:19:07.994507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.920 qpair failed and we were unable to recover it. 00:29:01.920 [2024-12-05 14:19:07.994868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.920 [2024-12-05 14:19:07.994896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.920 qpair failed and we were unable to recover it. 00:29:01.920 [2024-12-05 14:19:07.994987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.920 [2024-12-05 14:19:07.995015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.920 qpair failed and we were unable to recover it. 00:29:01.920 [2024-12-05 14:19:07.995342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.920 [2024-12-05 14:19:07.995370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.920 qpair failed and we were unable to recover it. 00:29:01.920 [2024-12-05 14:19:07.995736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.920 [2024-12-05 14:19:07.995766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.920 qpair failed and we were unable to recover it. 00:29:01.920 [2024-12-05 14:19:07.995934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.920 [2024-12-05 14:19:07.995966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.920 qpair failed and we were unable to recover it. 
00:29:01.920 [2024-12-05 14:19:07.996324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.920 [2024-12-05 14:19:07.996353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.920 qpair failed and we were unable to recover it. 00:29:01.920 [2024-12-05 14:19:07.996757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.920 [2024-12-05 14:19:07.996787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.920 qpair failed and we were unable to recover it. 00:29:01.920 [2024-12-05 14:19:07.997138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.920 [2024-12-05 14:19:07.997168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.920 qpair failed and we were unable to recover it. 00:29:01.920 [2024-12-05 14:19:07.997485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.920 [2024-12-05 14:19:07.997515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.920 qpair failed and we were unable to recover it. 00:29:01.920 [2024-12-05 14:19:07.997860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.920 [2024-12-05 14:19:07.997889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.920 qpair failed and we were unable to recover it. 00:29:01.920 [2024-12-05 14:19:07.998246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.920 [2024-12-05 14:19:07.998274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.920 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:07.998615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:07.998645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:07.998872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:07.998900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:07.999249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:07.999278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:07.999440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:07.999480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 
00:29:01.921 [2024-12-05 14:19:07.999798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:07.999827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:08.000025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:08.000054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:08.000426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:08.000480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:08.000815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:08.000843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:08.001202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:08.001231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:08.001478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:08.001511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:08.001768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:08.001797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:08.002212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:08.002241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:08.002576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:08.002606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:08.003004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:08.003032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 
00:29:01.921 [2024-12-05 14:19:08.003325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:08.003355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:08.003733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:08.003763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:08.004098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:08.004128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:08.004492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:08.004521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:08.004756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:08.004784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:08.005138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:08.005166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:08.005544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:08.005574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:08.005802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:08.005830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:08.006264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:08.006292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:08.006508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:08.006539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 
00:29:01.921 [2024-12-05 14:19:08.006899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:08.006927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:08.007122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:08.007151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:08.007356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:08.007385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:08.007735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:08.007765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.921 qpair failed and we were unable to recover it. 00:29:01.921 [2024-12-05 14:19:08.008094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.921 [2024-12-05 14:19:08.008124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.922 qpair failed and we were unable to recover it. 00:29:01.922 [2024-12-05 14:19:08.008477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.922 [2024-12-05 14:19:08.008508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.922 qpair failed and we were unable to recover it. 00:29:01.922 [2024-12-05 14:19:08.008853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.922 [2024-12-05 14:19:08.008882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.922 qpair failed and we were unable to recover it. 00:29:01.922 [2024-12-05 14:19:08.009144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.922 [2024-12-05 14:19:08.009172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.922 qpair failed and we were unable to recover it. 00:29:01.922 [2024-12-05 14:19:08.009544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.922 [2024-12-05 14:19:08.009574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.922 qpair failed and we were unable to recover it. 00:29:01.922 [2024-12-05 14:19:08.009953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.922 [2024-12-05 14:19:08.009981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.922 qpair failed and we were unable to recover it. 
00:29:01.922 [2024-12-05 14:19:08.010339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.922 [2024-12-05 14:19:08.010368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.922 qpair failed and we were unable to recover it. 00:29:01.922 [2024-12-05 14:19:08.010757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.922 [2024-12-05 14:19:08.010787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.922 qpair failed and we were unable to recover it. 00:29:01.922 [2024-12-05 14:19:08.011257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.922 [2024-12-05 14:19:08.011286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.922 qpair failed and we were unable to recover it. 00:29:01.922 [2024-12-05 14:19:08.011628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.922 [2024-12-05 14:19:08.011659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.922 qpair failed and we were unable to recover it. 00:29:01.922 [2024-12-05 14:19:08.012005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.922 [2024-12-05 14:19:08.012033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.922 qpair failed and we were unable to recover it. 00:29:01.922 [2024-12-05 14:19:08.012276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.922 [2024-12-05 14:19:08.012309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.922 qpair failed and we were unable to recover it. 00:29:01.922 [2024-12-05 14:19:08.012673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.922 [2024-12-05 14:19:08.012703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.922 qpair failed and we were unable to recover it. 00:29:01.922 [2024-12-05 14:19:08.013052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.922 [2024-12-05 14:19:08.013080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.922 qpair failed and we were unable to recover it. 00:29:01.922 [2024-12-05 14:19:08.013294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.922 [2024-12-05 14:19:08.013326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.922 qpair failed and we were unable to recover it. 00:29:01.922 [2024-12-05 14:19:08.013673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.922 [2024-12-05 14:19:08.013703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.922 qpair failed and we were unable to recover it. 
00:29:01.922 [2024-12-05 14:19:08.013932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.922 [2024-12-05 14:19:08.013961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.922 qpair failed and we were unable to recover it. 00:29:01.922 [2024-12-05 14:19:08.014309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.922 [2024-12-05 14:19:08.014337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.922 qpair failed and we were unable to recover it. 00:29:01.922 [2024-12-05 14:19:08.014697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.922 [2024-12-05 14:19:08.014728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.922 qpair failed and we were unable to recover it. 00:29:01.922 [2024-12-05 14:19:08.014940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.922 [2024-12-05 14:19:08.014973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.922 qpair failed and we were unable to recover it. 00:29:01.922 [2024-12-05 14:19:08.015325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.922 [2024-12-05 14:19:08.015354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.922 qpair failed and we were unable to recover it. 00:29:01.922 [2024-12-05 14:19:08.015691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.922 [2024-12-05 14:19:08.015727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.922 qpair failed and we were unable to recover it. 00:29:01.922 [2024-12-05 14:19:08.016094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.922 [2024-12-05 14:19:08.016123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.922 qpair failed and we were unable to recover it. 00:29:01.922 [2024-12-05 14:19:08.016483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.922 [2024-12-05 14:19:08.016512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.016850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.016879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.017258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.017286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 
00:29:01.923 [2024-12-05 14:19:08.017540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.017569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.017979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.018008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.018357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.018386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.018755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.018786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.019153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.019181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.019556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.019586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.019970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.019999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.020368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.020396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.020744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.020774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.021143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.021172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 
00:29:01.923 [2024-12-05 14:19:08.021521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.021552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.021900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.021929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.022301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.022330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.022546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.022576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.022790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.022818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.023188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.023216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.023432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.023468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.023852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.023881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.024242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.024270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.024630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.024661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 
00:29:01.923 [2024-12-05 14:19:08.024984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.025014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.025372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.025401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.025761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.025791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.026155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.026184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.026390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.026418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.026674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.026705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.923 [2024-12-05 14:19:08.027069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.923 [2024-12-05 14:19:08.027099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.923 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.027440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.027482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.027736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.027765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.027961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.027991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 
00:29:01.924 [2024-12-05 14:19:08.028346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.028374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.028731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.028762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.028969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.028998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.029343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.029372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.029597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.029631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.029988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.030023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.030224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.030254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.030612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.030643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.031012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.031040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.031397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.031425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 
00:29:01.924 [2024-12-05 14:19:08.031784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.031814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.032197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.032226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.032432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.032470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.032712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.032742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.032983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.033011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.033306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.033335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.033536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.033568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.033889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.033918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.034295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.034324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.034598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.034629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 
00:29:01.924 [2024-12-05 14:19:08.034859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.034891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.035253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.035282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.035639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.035669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.036041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.036070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.036471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.036501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.924 qpair failed and we were unable to recover it. 00:29:01.924 [2024-12-05 14:19:08.036846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.924 [2024-12-05 14:19:08.036875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.925 qpair failed and we were unable to recover it. 00:29:01.925 [2024-12-05 14:19:08.037215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.925 [2024-12-05 14:19:08.037243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.925 qpair failed and we were unable to recover it. 00:29:01.925 [2024-12-05 14:19:08.037348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.925 [2024-12-05 14:19:08.037380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.925 qpair failed and we were unable to recover it. 00:29:01.925 [2024-12-05 14:19:08.037754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.925 [2024-12-05 14:19:08.037785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.925 qpair failed and we were unable to recover it. 00:29:01.925 [2024-12-05 14:19:08.038043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.925 [2024-12-05 14:19:08.038076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420 00:29:01.925 qpair failed and we were unable to recover it. 
00:29:01.925 [2024-12-05 14:19:08.038425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.925 [2024-12-05 14:19:08.038464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420
00:29:01.925 qpair failed and we were unable to recover it.
00:29:01.925 [... the same connect() failed (errno = 111) / sock connection error pair for tqpair=0x7f2aac000b90 repeats for every reconnect attempt between 14:19:08.038 and 14:19:08.077, each attempt ending "qpair failed and we were unable to recover it." ...]
00:29:01.929 [2024-12-05 14:19:08.077397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.929 [2024-12-05 14:19:08.077426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aac000b90 with addr=10.0.0.2, port=4420
00:29:01.929 qpair failed and we were unable to recover it.
00:29:01.929 [2024-12-05 14:19:08.078011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.929 [2024-12-05 14:19:08.078107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:29:01.929 qpair failed and we were unable to recover it.
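On Linux, errno = 111 is ECONNREFUSED: every TCP SYN the initiator sends to 10.0.0.2:4420 (the IANA-assigned NVMe/TCP port) is answered with a RST because nothing is accepting connections there, so the qpair can never be re-established. A minimal standalone sketch of the connect() path this log line comes from; the address and port are taken from the log above, everything else is illustrative and not SPDK's actual posix_sock_create code:

    /* Sketch: reproduce the ECONNREFUSED the autotest log keeps showing. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port   = htons(4420),          /* NVMe/TCP port from the log */
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* Against a host with no listener on 4420 this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }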
00:29:01.929 [... identical errno = 111 connect failures for tqpair=0x19cc0c0 repeat from 14:19:08.080 through 14:19:08.095, each ending "qpair failed and we were unable to recover it." ...]
00:29:01.931 [2024-12-05 14:19:08.095591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.931 [2024-12-05 14:19:08.095619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cc0c0 with addr=10.0.0.2, port=4420
00:29:01.931 qpair failed and we were unable to recover it.
00:29:01.931 Read completed with error (sct=0, sc=8)
00:29:01.931 starting I/O failed
00:29:01.931 [... all 32 outstanding Read/Write commands complete with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:29:01.931 [2024-12-05 14:19:08.096363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:01.931 [2024-12-05 14:19:08.096902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.931 [2024-12-05 14:19:08.097010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420
00:29:01.931 qpair failed and we were unable to recover it.
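The -6 in the nvme_qpair.c line is -ENXIO ("No such device or address"), which spdk_nvme_qpair_process_completions() returns once the TCP connection underneath the qpair has died; the 32 aborted commands just before it (sct=0, sc=8, generic status "Command Aborted due to SQ Deletion") are the queue being drained before teardown. A hedged sketch of how a host-side poller might consume that return code; spdk_nvme_qpair_process_completions() is real SPDK API, but the wrapper around it is illustrative and not this test's actual code:

    /* Sketch only: react to a dead qpair the way this log implies. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <spdk/nvme.h>

    /* Returns 0 while the qpair is healthy, -1 once it must be torn down
     * and reconnected (as this test keeps attempting above). */
    static int poll_qpair_once(struct spdk_nvme_qpair *qpair)
    {
        /* 0 = no limit on the number of completions reaped per call */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

        if (rc < 0) {
            /* Matches the log: rc == -6 (-ENXIO) after the transport
             * failed, with every outstanding command first completed
             * with error (sct=0, sc=8). */
            fprintf(stderr, "qpair dead: %s\n", strerror(-rc));
            return -1;
        }
        return 0;   /* rc completions were processed */
    }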
00:29:01.932 [... errno = 111 connect failures for the new tqpair=0x7f2aa4000b90 repeat from 14:19:08.097 through 14:19:08.110, each ending "qpair failed and we were unable to recover it." ...]
00:29:01.932 [2024-12-05 14:19:08.110330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:01.932 [2024-12-05 14:19:08.110359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420
00:29:01.932 qpair failed and we were unable to recover it.
00:29:01.932 [2024-12-05 14:19:08.110718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.932 [2024-12-05 14:19:08.110749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.932 qpair failed and we were unable to recover it. 00:29:01.932 [2024-12-05 14:19:08.111109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.932 [2024-12-05 14:19:08.111138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.932 qpair failed and we were unable to recover it. 00:29:01.932 [2024-12-05 14:19:08.111508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.932 [2024-12-05 14:19:08.111538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.932 qpair failed and we were unable to recover it. 00:29:01.932 [2024-12-05 14:19:08.111959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.932 [2024-12-05 14:19:08.111988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.932 qpair failed and we were unable to recover it. 00:29:01.932 [2024-12-05 14:19:08.112201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.112229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.112542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.112571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.112964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.112993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.113341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.113376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.113709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.113738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.114091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.114120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 
00:29:01.933 [2024-12-05 14:19:08.114474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.114505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.114765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.114799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.114995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.115025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.115374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.115404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.115637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.115671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.116007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.116037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.116371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.116400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.116746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.116777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.116946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.116975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.117354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.117384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 
00:29:01.933 [2024-12-05 14:19:08.117765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.117796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.118169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.118199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.118546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.118578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.118932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.118962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.119227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.119256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.119593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.119624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.119844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.119874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.119992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.120021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.120390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.120419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.120798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.120829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 
00:29:01.933 [2024-12-05 14:19:08.121184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.121213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.121530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.933 [2024-12-05 14:19:08.121560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.933 qpair failed and we were unable to recover it. 00:29:01.933 [2024-12-05 14:19:08.121881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.121910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 00:29:01.934 [2024-12-05 14:19:08.122246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.122275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 00:29:01.934 [2024-12-05 14:19:08.122620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.122651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 00:29:01.934 [2024-12-05 14:19:08.122983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.123013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 00:29:01.934 [2024-12-05 14:19:08.123360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.123390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 00:29:01.934 [2024-12-05 14:19:08.123637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.123668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 00:29:01.934 [2024-12-05 14:19:08.123921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.123954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 00:29:01.934 [2024-12-05 14:19:08.124179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.124208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 
00:29:01.934 [2024-12-05 14:19:08.124301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.124329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 00:29:01.934 [2024-12-05 14:19:08.124663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.124692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 00:29:01.934 [2024-12-05 14:19:08.125024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.125053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 00:29:01.934 [2024-12-05 14:19:08.125384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.125413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 00:29:01.934 [2024-12-05 14:19:08.125698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.125728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 00:29:01.934 [2024-12-05 14:19:08.126083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.126112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 00:29:01.934 [2024-12-05 14:19:08.126494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.126526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 00:29:01.934 [2024-12-05 14:19:08.126890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.126926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 00:29:01.934 [2024-12-05 14:19:08.127262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.127292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 00:29:01.934 [2024-12-05 14:19:08.127619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.127649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 
00:29:01.934 [2024-12-05 14:19:08.127866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.127897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 00:29:01.934 [2024-12-05 14:19:08.128266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.128295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 00:29:01.934 [2024-12-05 14:19:08.128606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.128636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 00:29:01.934 [2024-12-05 14:19:08.128870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.128898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 00:29:01.934 [2024-12-05 14:19:08.129259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.129287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 00:29:01.934 [2024-12-05 14:19:08.129660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.129690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 00:29:01.934 [2024-12-05 14:19:08.129928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.129957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 00:29:01.934 [2024-12-05 14:19:08.130305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.130334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.934 qpair failed and we were unable to recover it. 00:29:01.934 [2024-12-05 14:19:08.130685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.934 [2024-12-05 14:19:08.130715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.131069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.131098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 
00:29:01.935 [2024-12-05 14:19:08.131426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.131466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.131826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.131858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.132184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.132212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.132564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.132595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.132951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.132980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.133336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.133365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.133744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.133774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.133979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.134008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.134320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.134348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.134548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.134578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 
00:29:01.935 [2024-12-05 14:19:08.134944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.134973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.135329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.135359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.135581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.135611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.135839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.135867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.136239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.136269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.136483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.136513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.136860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.136889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.137208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.137237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.137449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.137491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.137870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.137900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 
00:29:01.935 [2024-12-05 14:19:08.138259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.138288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.138640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.138670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.138931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.138960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.139330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.139359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.139692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.139722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.140080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.935 [2024-12-05 14:19:08.140108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.935 qpair failed and we were unable to recover it. 00:29:01.935 [2024-12-05 14:19:08.140473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.140504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.140747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.140783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.141056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.141085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.141436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.141476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 
00:29:01.936 [2024-12-05 14:19:08.141752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.141782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.142028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.142057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.142385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.142414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.142775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.142806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.143128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.143157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.143511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.143542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.143881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.143910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.144107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.144136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.144492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.144522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.144844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.144873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 
00:29:01.936 [2024-12-05 14:19:08.145229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.145257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.145612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.145643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.146002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.146031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.146387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.146415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.146795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.146826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.147052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.147081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.147318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.147348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.147720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.147752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.148103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.148132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.148487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.148517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 
00:29:01.936 [2024-12-05 14:19:08.148709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.148739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.149099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.149128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.149431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.149468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.149745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.936 [2024-12-05 14:19:08.149774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.936 qpair failed and we were unable to recover it. 00:29:01.936 [2024-12-05 14:19:08.150062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.150092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 00:29:01.937 [2024-12-05 14:19:08.150439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.150475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 00:29:01.937 [2024-12-05 14:19:08.150819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.150848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 00:29:01.937 [2024-12-05 14:19:08.151206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.151235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 00:29:01.937 [2024-12-05 14:19:08.151532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.151562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 00:29:01.937 [2024-12-05 14:19:08.151884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.151913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 
00:29:01.937 [2024-12-05 14:19:08.152274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.152303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 00:29:01.937 [2024-12-05 14:19:08.152531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.152561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 00:29:01.937 [2024-12-05 14:19:08.152904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.152933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 00:29:01.937 [2024-12-05 14:19:08.153135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.153164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 00:29:01.937 [2024-12-05 14:19:08.153423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.153451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 00:29:01.937 [2024-12-05 14:19:08.153780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.153809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 00:29:01.937 [2024-12-05 14:19:08.154180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.154208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 00:29:01.937 [2024-12-05 14:19:08.154561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.154598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 00:29:01.937 [2024-12-05 14:19:08.154807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.154836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 00:29:01.937 [2024-12-05 14:19:08.155198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.155228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 
00:29:01.937 [2024-12-05 14:19:08.155437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.155478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 00:29:01.937 [2024-12-05 14:19:08.155800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.155828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 00:29:01.937 [2024-12-05 14:19:08.156196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.156225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 00:29:01.937 [2024-12-05 14:19:08.156594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.156624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 00:29:01.937 [2024-12-05 14:19:08.156975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.157004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 00:29:01.937 [2024-12-05 14:19:08.157360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.157389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 00:29:01.937 [2024-12-05 14:19:08.157598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.157628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 00:29:01.937 [2024-12-05 14:19:08.157973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.158002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 00:29:01.937 [2024-12-05 14:19:08.158250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.158283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 00:29:01.937 [2024-12-05 14:19:08.158610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:01.937 [2024-12-05 14:19:08.158640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:01.937 qpair failed and we were unable to recover it. 
00:29:02.219 [2024-12-05 14:19:08.225024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.219 [2024-12-05 14:19:08.225053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.219 qpair failed and we were unable to recover it. 00:29:02.219 [2024-12-05 14:19:08.225406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.219 [2024-12-05 14:19:08.225435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.219 qpair failed and we were unable to recover it. 00:29:02.219 [2024-12-05 14:19:08.225689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.219 [2024-12-05 14:19:08.225719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.219 qpair failed and we were unable to recover it. 00:29:02.219 [2024-12-05 14:19:08.226047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.219 [2024-12-05 14:19:08.226076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.219 qpair failed and we were unable to recover it. 00:29:02.219 [2024-12-05 14:19:08.226285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.219 [2024-12-05 14:19:08.226313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.219 qpair failed and we were unable to recover it. 00:29:02.219 [2024-12-05 14:19:08.226524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.219 [2024-12-05 14:19:08.226554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.219 qpair failed and we were unable to recover it. 00:29:02.219 [2024-12-05 14:19:08.226926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.219 [2024-12-05 14:19:08.226955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.219 qpair failed and we were unable to recover it. 00:29:02.219 [2024-12-05 14:19:08.227285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.219 [2024-12-05 14:19:08.227314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.219 qpair failed and we were unable to recover it. 00:29:02.219 [2024-12-05 14:19:08.227598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.219 [2024-12-05 14:19:08.227628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.219 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.227970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.227999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 
00:29:02.220 [2024-12-05 14:19:08.228359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.228388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.228748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.228778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.229138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.229168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.229288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.229317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.229688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.229717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.230075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.230104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.230450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.230488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.230876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.230905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.231111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.231140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.231474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.231505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 
00:29:02.220 [2024-12-05 14:19:08.231714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.231742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.232075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.232104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.232467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.232498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.232834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.232863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.233218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.233254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.233492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.233523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.233935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.233964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.234331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.234360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.234716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.234746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.235091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.235121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 
00:29:02.220 [2024-12-05 14:19:08.235476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.235506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.235727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.235755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.236134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.236163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.236559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.236590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.236921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.236951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.237300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.237329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.220 [2024-12-05 14:19:08.237545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.220 [2024-12-05 14:19:08.237576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.220 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.237916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.237945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.238310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.238339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.238660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.238691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 
00:29:02.221 [2024-12-05 14:19:08.239086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.239115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.239471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.239502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.239863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.239894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.240244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.240273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.240600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.240630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.240959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.240988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.241344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.241373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.241673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.241704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.242057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.242085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.242445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.242492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 
00:29:02.221 [2024-12-05 14:19:08.242831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.242860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.243086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.243115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.243312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.243341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.243564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.243594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.243927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.243956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.244304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.244334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.244676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.244706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.245058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.245087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.245451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.245491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.245884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.245913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 
00:29:02.221 [2024-12-05 14:19:08.246279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.246311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.246626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.246657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.247018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.247048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.247283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.247311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.247547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.247588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.247945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.247974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.221 [2024-12-05 14:19:08.248184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.221 [2024-12-05 14:19:08.248212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.221 qpair failed and we were unable to recover it. 00:29:02.222 [2024-12-05 14:19:08.248554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.248584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 00:29:02.222 [2024-12-05 14:19:08.248912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.248941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 00:29:02.222 [2024-12-05 14:19:08.249300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.249330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 
00:29:02.222 [2024-12-05 14:19:08.249669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.249699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 00:29:02.222 [2024-12-05 14:19:08.250027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.250056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 00:29:02.222 [2024-12-05 14:19:08.250414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.250444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 00:29:02.222 [2024-12-05 14:19:08.250771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.250800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 00:29:02.222 [2024-12-05 14:19:08.251168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.251197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 00:29:02.222 [2024-12-05 14:19:08.251530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.251561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 00:29:02.222 [2024-12-05 14:19:08.251920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.251949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 00:29:02.222 [2024-12-05 14:19:08.252301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.252330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 00:29:02.222 [2024-12-05 14:19:08.252761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.252791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 00:29:02.222 [2024-12-05 14:19:08.252994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.253024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 
00:29:02.222 [2024-12-05 14:19:08.253387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.253416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 00:29:02.222 [2024-12-05 14:19:08.253753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.253783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 00:29:02.222 [2024-12-05 14:19:08.254139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.254169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 00:29:02.222 [2024-12-05 14:19:08.254515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.254547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 00:29:02.222 [2024-12-05 14:19:08.254888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.254917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 00:29:02.222 [2024-12-05 14:19:08.255248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.255277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 00:29:02.222 [2024-12-05 14:19:08.255571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.255603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 00:29:02.222 [2024-12-05 14:19:08.255836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.255865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 00:29:02.222 [2024-12-05 14:19:08.256205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.256234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 00:29:02.222 [2024-12-05 14:19:08.256429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.256466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 
00:29:02.222 [2024-12-05 14:19:08.256699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.256728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 00:29:02.222 [2024-12-05 14:19:08.257087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.257116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.222 qpair failed and we were unable to recover it. 00:29:02.222 [2024-12-05 14:19:08.257340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.222 [2024-12-05 14:19:08.257370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.257723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.257753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.258149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.258177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.258529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.258558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.258764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.258793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.259145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.259173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.259534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.259566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.259897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.259926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 
00:29:02.223 [2024-12-05 14:19:08.260280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.260309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.260658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.260688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.260879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.260908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.261249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.261278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.261517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.261552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.261915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.261944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.262225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.262255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.262601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.262632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.262986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.263015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.263369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.263397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 
00:29:02.223 [2024-12-05 14:19:08.263753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.263783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.264136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.264165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.264425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.264462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.264795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.264824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.265181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.265211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.265435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.265474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.265563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.265592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.265932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.265963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.266294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.266323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.266561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.266593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 
00:29:02.223 [2024-12-05 14:19:08.266793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.266823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.267031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.267060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.223 [2024-12-05 14:19:08.267464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.223 [2024-12-05 14:19:08.267495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.223 qpair failed and we were unable to recover it. 00:29:02.224 [2024-12-05 14:19:08.267838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.224 [2024-12-05 14:19:08.267868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.224 qpair failed and we were unable to recover it. 00:29:02.224 [2024-12-05 14:19:08.268215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.224 [2024-12-05 14:19:08.268244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.224 qpair failed and we were unable to recover it. 00:29:02.224 [2024-12-05 14:19:08.268452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.224 [2024-12-05 14:19:08.268489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.224 qpair failed and we were unable to recover it. 00:29:02.224 [2024-12-05 14:19:08.268895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.224 [2024-12-05 14:19:08.268924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.224 qpair failed and we were unable to recover it. 00:29:02.224 [2024-12-05 14:19:08.269172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.224 [2024-12-05 14:19:08.269202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.224 qpair failed and we were unable to recover it. 00:29:02.224 [2024-12-05 14:19:08.269518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.224 [2024-12-05 14:19:08.269548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.224 qpair failed and we were unable to recover it. 00:29:02.224 [2024-12-05 14:19:08.269741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.224 [2024-12-05 14:19:08.269770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.224 qpair failed and we were unable to recover it. 
00:29:02.224 [2024-12-05 14:19:08.270147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.224 [2024-12-05 14:19:08.270177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.224 qpair failed and we were unable to recover it. 00:29:02.224 [2024-12-05 14:19:08.270498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.224 [2024-12-05 14:19:08.270529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.224 qpair failed and we were unable to recover it. 00:29:02.224 [2024-12-05 14:19:08.270868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.224 [2024-12-05 14:19:08.270897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.224 qpair failed and we were unable to recover it. 00:29:02.224 [2024-12-05 14:19:08.271254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.224 [2024-12-05 14:19:08.271283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.224 qpair failed and we were unable to recover it. 00:29:02.224 [2024-12-05 14:19:08.271636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.224 [2024-12-05 14:19:08.271666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.224 qpair failed and we were unable to recover it. 00:29:02.224 [2024-12-05 14:19:08.271888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.224 [2024-12-05 14:19:08.271916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.224 qpair failed and we were unable to recover it. 00:29:02.224 [2024-12-05 14:19:08.272244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.224 [2024-12-05 14:19:08.272273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.224 qpair failed and we were unable to recover it. 00:29:02.224 [2024-12-05 14:19:08.272476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.224 [2024-12-05 14:19:08.272506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.224 qpair failed and we were unable to recover it. 00:29:02.224 [2024-12-05 14:19:08.272816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.224 [2024-12-05 14:19:08.272845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.224 qpair failed and we were unable to recover it. 00:29:02.224 [2024-12-05 14:19:08.273180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.224 [2024-12-05 14:19:08.273209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.224 qpair failed and we were unable to recover it. 
00:29:02.224 [2024-12-05 14:19:08.273546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.224 [2024-12-05 14:19:08.273576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420
00:29:02.224 qpair failed and we were unable to recover it.
00:29:02.224 [... the same three-line error sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats, with only the microsecond timestamps changing, roughly 200 more times through 14:19:08.345 ...]
00:29:02.230 [2024-12-05 14:19:08.345943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.230 [2024-12-05 14:19:08.345971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420
00:29:02.230 qpair failed and we were unable to recover it.
00:29:02.230 [2024-12-05 14:19:08.346303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.230 [2024-12-05 14:19:08.346331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.230 qpair failed and we were unable to recover it. 00:29:02.230 [2024-12-05 14:19:08.346616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.230 [2024-12-05 14:19:08.346647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.230 qpair failed and we were unable to recover it. 00:29:02.230 [2024-12-05 14:19:08.346990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.230 [2024-12-05 14:19:08.347018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.230 qpair failed and we were unable to recover it. 00:29:02.230 [2024-12-05 14:19:08.347187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.230 [2024-12-05 14:19:08.347216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.230 qpair failed and we were unable to recover it. 00:29:02.230 [2024-12-05 14:19:08.347570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.230 [2024-12-05 14:19:08.347600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.230 qpair failed and we were unable to recover it. 00:29:02.230 [2024-12-05 14:19:08.347921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.230 [2024-12-05 14:19:08.347949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.230 qpair failed and we were unable to recover it. 00:29:02.230 [2024-12-05 14:19:08.348183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.230 [2024-12-05 14:19:08.348212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.230 qpair failed and we were unable to recover it. 00:29:02.230 [2024-12-05 14:19:08.348550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.230 [2024-12-05 14:19:08.348579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.230 qpair failed and we were unable to recover it. 00:29:02.230 [2024-12-05 14:19:08.348815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.230 [2024-12-05 14:19:08.348848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.230 qpair failed and we were unable to recover it. 00:29:02.230 [2024-12-05 14:19:08.349052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.230 [2024-12-05 14:19:08.349080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.230 qpair failed and we were unable to recover it. 
00:29:02.231 [2024-12-05 14:19:08.349450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.349488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.349715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.349744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.350004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.350032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.350338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.350366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.350630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.350661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.351023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.351052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.351249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.351278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.351580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.351610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.351952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.351980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.352329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.352357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 
00:29:02.231 [2024-12-05 14:19:08.352588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.352618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.352953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.352982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.353212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.353241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.353480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.353510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.353842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.353871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.354071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.354100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.354357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.354385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.354707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.354738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.354895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.354924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.355281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.355310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 
00:29:02.231 [2024-12-05 14:19:08.355645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.355675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.355903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.355931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.356284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.356312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.356536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.356565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.356925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.356959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.357236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.357264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.357475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.357504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.357832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.357860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.358206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.358234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.358441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.358489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 
00:29:02.231 [2024-12-05 14:19:08.358819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.358847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.359190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.359218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.359579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.359609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.359854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.359883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.360208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.360236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.360592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.360621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.360870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.360898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.361168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.361196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.361559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.361589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.361821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.361850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 
00:29:02.231 [2024-12-05 14:19:08.362054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.362082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.362446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.362490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.362720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.362748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.363101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.363130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.363487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.363517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.363858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.363886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.364122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.364154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.364493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.364523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.364859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.364887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 00:29:02.231 [2024-12-05 14:19:08.365253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.231 [2024-12-05 14:19:08.365282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.231 qpair failed and we were unable to recover it. 
00:29:02.231 [2024-12-05 14:19:08.365531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.365561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.365905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.365935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.366278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.366307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.366672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.366702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.366960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.366989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.367392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.367421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.367788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.367819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.368150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.368179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.368529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.368560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.368786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.368817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 
00:29:02.232 [2024-12-05 14:19:08.369210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.369239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.369430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.369467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.369806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.369834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.370053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.370082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.370422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.370473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.370825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.370855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.371084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.371113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.371353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.371381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.371732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.371762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.371868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.371900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 
00:29:02.232 [2024-12-05 14:19:08.372258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.372287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.372525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.372559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.372922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.372950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.373302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.373331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.373436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.373471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.373819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.373847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.374200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.374229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.374577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.374607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.374843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.374872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 00:29:02.232 [2024-12-05 14:19:08.375101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.232 [2024-12-05 14:19:08.375130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420 00:29:02.232 qpair failed and we were unable to recover it. 
00:29:02.232 [2024-12-05 14:19:08.375345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.232 [2024-12-05 14:19:08.375377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa4000b90 with addr=10.0.0.2, port=4420
00:29:02.232 qpair failed and we were unable to recover it.
[... two more identical failures for tqpair=0x7f2aa4000b90 at 14:19:08.375696 and 14:19:08.375816 ...]
00:29:02.232 [2024-12-05 14:19:08.376365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.232 [2024-12-05 14:19:08.376476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.232 qpair failed and we were unable to recover it.
[... one more identical failure for the new tqpair=0x7f2aa0000b90 at 14:19:08.376877 ...]
00:29:02.232 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
[... one failure for tqpair=0x7f2aa0000b90 at 14:19:08.377268 ...]
00:29:02.232 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
[... one failure for tqpair=0x7f2aa0000b90 at 14:19:08.377573 ...]
00:29:02.232 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
[... one failure for tqpair=0x7f2aa0000b90 at 14:19:08.377958 ...]
00:29:02.232 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
[... one failure for tqpair=0x7f2aa0000b90 at 14:19:08.378204 ...]
00:29:02.232 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the same failure repeats for tqpair=0x7f2aa0000b90 eight more times, from 14:19:08.378521 through 14:19:08.380771 ...]
00:29:02.233 [2024-12-05 14:19:08.380923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.233 [2024-12-05 14:19:08.380953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.233 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for tqpair=0x7f2aa0000b90 at 10.0.0.2:4420 roughly 60 more times, from 14:19:08.381321 through 14:19:08.401368 ...]
00:29:02.234 [2024-12-05 14:19:08.401693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.401724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.402044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.402074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.402422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.402452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.402657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.402687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.403036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.403065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.403314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.403343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.403680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.403710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.404038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.404067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.404408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.404437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.404787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.404817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 
00:29:02.234 [2024-12-05 14:19:08.405168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.405197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.405434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.405480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.405843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.405875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.406225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.406255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.406488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.406518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.406875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.406905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.407260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.407289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.407542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.407571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.407903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.407934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.408161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.408191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 
00:29:02.234 [2024-12-05 14:19:08.408538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.408567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.408873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.408901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.409240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.409269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.409631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.409661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.409977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.410006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.410237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.410272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.410532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.410566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.410944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.410974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.411324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.411354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.411703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.411733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 
00:29:02.234 [2024-12-05 14:19:08.411956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.411984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.412327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.412355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.412619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.412650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.413000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.413031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.413378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.413407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.234 [2024-12-05 14:19:08.413757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.234 [2024-12-05 14:19:08.413787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.234 qpair failed and we were unable to recover it. 00:29:02.235 [2024-12-05 14:19:08.414030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.235 [2024-12-05 14:19:08.414061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.235 qpair failed and we were unable to recover it. 00:29:02.235 [2024-12-05 14:19:08.414264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.235 [2024-12-05 14:19:08.414292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.235 qpair failed and we were unable to recover it. 00:29:02.235 [2024-12-05 14:19:08.414633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.235 [2024-12-05 14:19:08.414664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.235 qpair failed and we were unable to recover it. 00:29:02.235 [2024-12-05 14:19:08.414992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.235 [2024-12-05 14:19:08.415022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.235 qpair failed and we were unable to recover it. 
00:29:02.235 [2024-12-05 14:19:08.415230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.235 [2024-12-05 14:19:08.415258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.235 qpair failed and we were unable to recover it. 00:29:02.235 [2024-12-05 14:19:08.415596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.235 [2024-12-05 14:19:08.415627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.235 qpair failed and we were unable to recover it. 00:29:02.235 [2024-12-05 14:19:08.415842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.235 [2024-12-05 14:19:08.415875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.235 qpair failed and we were unable to recover it. 00:29:02.235 [2024-12-05 14:19:08.416245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.235 [2024-12-05 14:19:08.416274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.235 qpair failed and we were unable to recover it. 00:29:02.235 [2024-12-05 14:19:08.416607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.235 [2024-12-05 14:19:08.416638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.235 qpair failed and we were unable to recover it. 00:29:02.235 [2024-12-05 14:19:08.416975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.235 [2024-12-05 14:19:08.417005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.235 qpair failed and we were unable to recover it. 00:29:02.235 [2024-12-05 14:19:08.417217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.235 [2024-12-05 14:19:08.417250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.235 qpair failed and we were unable to recover it. 00:29:02.235 [2024-12-05 14:19:08.417494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.235 [2024-12-05 14:19:08.417526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.235 qpair failed and we were unable to recover it. 00:29:02.235 [2024-12-05 14:19:08.417837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.235 [2024-12-05 14:19:08.417867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420 00:29:02.235 qpair failed and we were unable to recover it. 
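Every attempt above fails with errno 111, which is ECONNREFUSED on Linux: the host at 10.0.0.2 is reachable, but nothing is listening on port 4420 while the target side of this disconnect test is down, so each connect() is answered with a TCP reset. A minimal bash sketch of the same probe (the address, the port, and the use of bash's /dev/tcp pseudo-device are illustrative assumptions, not part of the test script):

#!/usr/bin/env bash
# Sketch only: reproduce the condition the log reports.
# errno 111 (ECONNREFUSED) means the peer refused the TCP connection,
# i.e. no listener on that port. Address and port are assumptions.
if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
    echo "connect() to 10.0.0.2:4420 refused (errno 111, ECONNREFUSED)"
fi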
00:29:02.235 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:02.235 [2024-12-05 14:19:08.418220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.235 [2024-12-05 14:19:08.418251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.235 qpair failed and we were unable to recover it.
00:29:02.235 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:02.235 [2024-12-05 14:19:08.418602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.235 [2024-12-05 14:19:08.418633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.235 qpair failed and we were unable to recover it.
00:29:02.235 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:02.235 [2024-12-05 14:19:08.418986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.235 [2024-12-05 14:19:08.419017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.235 qpair failed and we were unable to recover it.
00:29:02.235 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:02.235 [2024-12-05 14:19:08.419365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.235 [2024-12-05 14:19:08.419395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.235 qpair failed and we were unable to recover it.
00:29:02.235 [... same three-line failure sequence repeats for retries from 14:19:08.419750 through 14:19:08.420438 ...]
00:29:02.235 [2024-12-05 14:19:08.420571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.235 [2024-12-05 14:19:08.420599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.235 qpair failed and we were unable to recover it.
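While the initiator keeps retrying, the test script moves on: it installs the cleanup trap above (dump the app's shared memory, then run nvmftestfini on exit) and creates the RAM-backed bdev the target will later export. A hedged sketch of the RPC that the harness's rpc_cmd wrapper forwards; the rpc.py and socket paths are assumptions, the arguments are copied verbatim from the log:

#!/usr/bin/env bash
# Sketch only: the direct form of the harness's rpc_cmd call.
SPDK_RPC=./scripts/rpc.py      # assumption: repo-relative rpc.py
RPC_SOCK=/var/tmp/spdk.sock    # assumption: default SPDK RPC socket
# Create a 64 MB malloc bdev with 512-byte blocks, named Malloc0.
"$SPDK_RPC" -s "$RPC_SOCK" bdev_malloc_create 64 512 -b Malloc0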
00:29:02.235 [2024-12-05 14:19:08.420923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.235 [2024-12-05 14:19:08.420951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.235 qpair failed and we were unable to recover it.
00:29:02.235 [... the same three-line connect() failed / sock connection error / qpair failed sequence repeats for every retry from 14:19:08.421329 through 14:19:08.444799, differing only in timestamps ...]
00:29:02.237 [2024-12-05 14:19:08.445132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.237 [2024-12-05 14:19:08.445161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.237 qpair failed and we were unable to recover it.
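The unbroken run of failures elided above is the initiator's reconnect loop at work: each iteration creates a fresh socket, is refused, tears the qpair down, and tries again until the listener returns. Its shape, sketched in bash rather than SPDK's C (the deadline, the interval, and the /dev/tcp probe are assumptions, not SPDK's implementation):

#!/usr/bin/env bash
# Sketch of the retry pattern the log records, not SPDK's code.
deadline=$((SECONDS + 10))               # assumption: 10 s retry budget
until (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; do
    if ((SECONDS >= deadline)); then
        echo "qpair failed and we were unable to recover it."
        break
    fi
    sleep 0.01                           # assumption: retry interval
done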
00:29:02.237 [2024-12-05 14:19:08.445495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.237 [2024-12-05 14:19:08.445526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.237 qpair failed and we were unable to recover it.
00:29:02.237 [2024-12-05 14:19:08.445761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.237 [2024-12-05 14:19:08.445791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.237 qpair failed and we were unable to recover it.
00:29:02.237 [2024-12-05 14:19:08.446025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.237 [2024-12-05 14:19:08.446053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.237 qpair failed and we were unable to recover it.
00:29:02.237 Malloc0
00:29:02.237 [2024-12-05 14:19:08.446375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.237 [2024-12-05 14:19:08.446404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.237 qpair failed and we were unable to recover it.
00:29:02.237 [2024-12-05 14:19:08.446751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.237 [2024-12-05 14:19:08.446781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.237 qpair failed and we were unable to recover it.
00:29:02.237 [2024-12-05 14:19:08.447002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.237 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:02.237 [2024-12-05 14:19:08.447031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.237 qpair failed and we were unable to recover it.
00:29:02.237 [2024-12-05 14:19:08.447247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.237 [2024-12-05 14:19:08.447275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.237 qpair failed and we were unable to recover it.
00:29:02.237 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:02.237 [2024-12-05 14:19:08.447479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.237 [2024-12-05 14:19:08.447509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.237 qpair failed and we were unable to recover it.
00:29:02.238 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:02.238 [2024-12-05 14:19:08.447747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.447777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:02.238 [2024-12-05 14:19:08.447972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.448001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [... same three-line failure sequence repeats for retries from 14:19:08.448355 through 14:19:08.450058 ...]
00:29:02.238 [2024-12-05 14:19:08.450392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.450422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [2024-12-05 14:19:08.450773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.450804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [2024-12-05 14:19:08.451056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.451088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [2024-12-05 14:19:08.451295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.451324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [2024-12-05 14:19:08.451703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.451734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [2024-12-05 14:19:08.452095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.452124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [2024-12-05 14:19:08.452414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.452443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [2024-12-05 14:19:08.452841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.452871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [2024-12-05 14:19:08.453222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.453250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [2024-12-05 14:19:08.453603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.453633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [2024-12-05 14:19:08.453657] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:02.238 [2024-12-05 14:19:08.453825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.453854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [2024-12-05 14:19:08.454047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.454076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [2024-12-05 14:19:08.454402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.454431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [2024-12-05 14:19:08.454775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.454806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [2024-12-05 14:19:08.455019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.455047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [2024-12-05 14:19:08.455364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.455393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [2024-12-05 14:19:08.455718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.455749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [2024-12-05 14:19:08.455906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.455936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [2024-12-05 14:19:08.456294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.456324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
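The repeating pair above is the host-side reconnect loop in action: posix_sock_create() gets errno 111 back from 10.0.0.2:4420 because nothing is listening there yet, and nvme_tcp then gives up on that qpair. errno 111 on Linux is ECONNREFUSED, which can be confirmed with a one-liner on any build host (illustrative only, not part of the test run):

    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
    # -> ECONNREFUSED Connection refused

The *** TCP Transport Init *** notice marks the point where the target application starts bringing up its side; the refusals keep flowing until a listener is actually added further down.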
00:29:02.238 [2024-12-05 14:19:08.456655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.456685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [2024-12-05 14:19:08.456947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.456979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [2024-12-05 14:19:08.457305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.457334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [2024-12-05 14:19:08.457689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.457719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [2024-12-05 14:19:08.458083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.458111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [2024-12-05 14:19:08.458470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.458501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.238 [2024-12-05 14:19:08.458747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.238 [2024-12-05 14:19:08.458777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.238 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.459122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.459151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.459515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.459546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.459920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.459949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.460157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.460185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.460523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.460553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.460943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.460972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.461320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.461348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.461690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.461720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.461892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.461924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.462283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.462313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] [2024-12-05 14:19:08.462667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.462697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 [2024-12-05 14:19:08.463014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.463049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.463190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.463220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable [2024-12-05 14:19:08.463492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.463525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [2024-12-05 14:19:08.463742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.463771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.464015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.464044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.464353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.464382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.464726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.464755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.465012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.465041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.465394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.465422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.465825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.465856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.466184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.466212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.466496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.466526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.466873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.466902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.467165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.467195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.467432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.467471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.467816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.467844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.468198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.468226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.468370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.468401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.468799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.468829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.469159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.469189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.469518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.469548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.469879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.239 [2024-12-05 14:19:08.469907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.239 qpair failed and we were unable to recover it.
00:29:02.239 [2024-12-05 14:19:08.470121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.470150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.470500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.470530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.470769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.470798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.471031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.471060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.471390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.471425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.471639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.471671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.472031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.472060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.472416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.472445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.472793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.472822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.473180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.473209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.473478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.473509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.473631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.473662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.473769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.473801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.474128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.474158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.474330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.474358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.474716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.474747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.475004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.475033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 [2024-12-05 14:19:08.475356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.475386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable [2024-12-05 14:19:08.475583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.475614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [2024-12-05 14:19:08.475963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.475993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.476207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.476238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.476476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.476507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.476813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.476843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.477066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.477094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.477419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.477448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.477694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.477724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.478076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.478105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.478488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.478520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.478835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.478865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.240 [2024-12-05 14:19:08.479219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.240 [2024-12-05 14:19:08.479249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.240 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.479594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.479624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.479852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.479881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.480236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.480265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.480500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.480529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.480721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.480749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.481142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.481171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.481521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.481550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.481954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.481983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.482331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.482360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.482716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.482747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.483108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.483137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.483490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.483520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.483841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.483876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.484217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.484246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.484438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.484487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.484850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.484879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.485222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.485251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.485585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.485615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.485817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.485846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.486203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.486232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.486561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.486592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] [2024-12-05 14:19:08.486908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.486937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-12-05 14:19:08.487269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.487299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable [2024-12-05 14:19:08.487542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.487572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [2024-12-05 14:19:08.487949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.487978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.488335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.488364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.488727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.488757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.489093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.489121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.489350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.489382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.489767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.489798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.490159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.490188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.490535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.241 [2024-12-05 14:19:08.490566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.241 qpair failed and we were unable to recover it.
00:29:02.241 [2024-12-05 14:19:08.490919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.242 [2024-12-05 14:19:08.490948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.242 qpair failed and we were unable to recover it.
00:29:02.242 [2024-12-05 14:19:08.491270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.242 [2024-12-05 14:19:08.491299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.242 qpair failed and we were unable to recover it.
00:29:02.242 [2024-12-05 14:19:08.491697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.242 [2024-12-05 14:19:08.491726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.242 qpair failed and we were unable to recover it.
00:29:02.242 [2024-12-05 14:19:08.492048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.242 [2024-12-05 14:19:08.492077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.242 qpair failed and we were unable to recover it.
00:29:02.242 [2024-12-05 14:19:08.492306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.242 [2024-12-05 14:19:08.492335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.242 qpair failed and we were unable to recover it.
00:29:02.242 [2024-12-05 14:19:08.492695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.242 [2024-12-05 14:19:08.492731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.242 qpair failed and we were unable to recover it.
00:29:02.242 [2024-12-05 14:19:08.493083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.242 [2024-12-05 14:19:08.493112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.242 qpair failed and we were unable to recover it.
00:29:02.242 [2024-12-05 14:19:08.493326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.242 [2024-12-05 14:19:08.493355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.242 qpair failed and we were unable to recover it.
00:29:02.242 [2024-12-05 14:19:08.493689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.242 [2024-12-05 14:19:08.493719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2aa0000b90 with addr=10.0.0.2, port=4420
00:29:02.242 qpair failed and we were unable to recover it.
00:29:02.242 [2024-12-05 14:19:08.493937] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:02.242 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:02.242 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:02.242 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:02.505 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:02.505 [2024-12-05 14:19:08.504636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.505 [2024-12-05 14:19:08.504785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.505 [2024-12-05 14:19:08.504827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.505 [2024-12-05 14:19:08.504847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.505 [2024-12-05 14:19:08.504866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:02.505 [2024-12-05 14:19:08.504916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:02.505 qpair failed and we were unable to recover it.
00:29:02.505 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:02.505 14:19:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2915639
00:29:02.505 [2024-12-05 14:19:08.514534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.505 [2024-12-05 14:19:08.514618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.505 [2024-12-05 14:19:08.514642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.505 [2024-12-05 14:19:08.514656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.505 [2024-12-05 14:19:08.514668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:02.505 [2024-12-05 14:19:08.514694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:02.505 qpair failed and we were unable to recover it.
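Interleaved with the connection noise above, the xtrace lines show the target being configured step by step through rpc_cmd, the autotest helper that forwards to SPDK's scripts/rpc.py: a TCP transport is created, subsystem nqn.2016-06.io.spdk:cnode1 is created with serial SPDK00000000000001, the Malloc0 bdev is attached as a namespace, and listeners are added for the subsystem and for discovery. Reconstructed as plain rpc.py calls, the sequence is roughly the sketch below; the rpc.py invocation path and the Malloc0 creation step are assumptions, since only the traced commands themselves are confirmed by this excerpt:

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512   # assumed: the Malloc0 creation is not shown above
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** notice fires, the errno 111 refusals stop: the TCP connection now succeeds, and the failure mode shifts to the fabric-level CONNECT errors that follow.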
00:29:02.505 [2024-12-05 14:19:08.524392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.505 [2024-12-05 14:19:08.524478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.505 [2024-12-05 14:19:08.524502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.505 [2024-12-05 14:19:08.524515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.505 [2024-12-05 14:19:08.524526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:02.505 [2024-12-05 14:19:08.524552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:02.505 qpair failed and we were unable to recover it.
00:29:02.505 [2024-12-05 14:19:08.534550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.505 [2024-12-05 14:19:08.534613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.505 [2024-12-05 14:19:08.534629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.505 [2024-12-05 14:19:08.534637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.505 [2024-12-05 14:19:08.534645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:02.505 [2024-12-05 14:19:08.534663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:02.505 qpair failed and we were unable to recover it.
00:29:02.505 [2024-12-05 14:19:08.554547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.505 [2024-12-05 14:19:08.554603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.505 [2024-12-05 14:19:08.554617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.505 [2024-12-05 14:19:08.554624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.505 [2024-12-05 14:19:08.554630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:02.505 [2024-12-05 14:19:08.554644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:02.505 qpair failed and we were unable to recover it.
00:29:02.505 [2024-12-05 14:19:08.564403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.505 [2024-12-05 14:19:08.564451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.505 [2024-12-05 14:19:08.564474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.505 [2024-12-05 14:19:08.564481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.505 [2024-12-05 14:19:08.564487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:02.505 [2024-12-05 14:19:08.564502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:02.505 qpair failed and we were unable to recover it.
00:29:02.505 [2024-12-05 14:19:08.574590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.505 [2024-12-05 14:19:08.574647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.505 [2024-12-05 14:19:08.574660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.505 [2024-12-05 14:19:08.574667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.505 [2024-12-05 14:19:08.574673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:02.505 [2024-12-05 14:19:08.574688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:02.505 qpair failed and we were unable to recover it.
00:29:02.505 [2024-12-05 14:19:08.584654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.505 [2024-12-05 14:19:08.584708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.505 [2024-12-05 14:19:08.584721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.505 [2024-12-05 14:19:08.584728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.505 [2024-12-05 14:19:08.584735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:02.505 [2024-12-05 14:19:08.584749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:02.505 qpair failed and we were unable to recover it.
00:29:02.505 [2024-12-05 14:19:08.594663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.505 [2024-12-05 14:19:08.594717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.505 [2024-12-05 14:19:08.594730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.505 [2024-12-05 14:19:08.594737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.505 [2024-12-05 14:19:08.594743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:02.505 [2024-12-05 14:19:08.594757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:02.505 qpair failed and we were unable to recover it.
00:29:02.505 [2024-12-05 14:19:08.604624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.506 [2024-12-05 14:19:08.604672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.506 [2024-12-05 14:19:08.604685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.506 [2024-12-05 14:19:08.604692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.506 [2024-12-05 14:19:08.604698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:02.506 [2024-12-05 14:19:08.604716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:02.506 qpair failed and we were unable to recover it.
00:29:02.506 [2024-12-05 14:19:08.614600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.506 [2024-12-05 14:19:08.614654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.506 [2024-12-05 14:19:08.614667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.506 [2024-12-05 14:19:08.614674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.506 [2024-12-05 14:19:08.614680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.506 [2024-12-05 14:19:08.614694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.506 qpair failed and we were unable to recover it. 00:29:02.506 [2024-12-05 14:19:08.624607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.506 [2024-12-05 14:19:08.624662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.506 [2024-12-05 14:19:08.624676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.506 [2024-12-05 14:19:08.624683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.506 [2024-12-05 14:19:08.624689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.506 [2024-12-05 14:19:08.624703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.506 qpair failed and we were unable to recover it. 00:29:02.506 [2024-12-05 14:19:08.634762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.506 [2024-12-05 14:19:08.634847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.506 [2024-12-05 14:19:08.634859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.506 [2024-12-05 14:19:08.634866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.506 [2024-12-05 14:19:08.634873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.506 [2024-12-05 14:19:08.634887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.506 qpair failed and we were unable to recover it. 
00:29:02.506 [2024-12-05 14:19:08.644724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.506 [2024-12-05 14:19:08.644772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.506 [2024-12-05 14:19:08.644787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.506 [2024-12-05 14:19:08.644794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.506 [2024-12-05 14:19:08.644800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.506 [2024-12-05 14:19:08.644814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.506 qpair failed and we were unable to recover it. 00:29:02.506 [2024-12-05 14:19:08.654794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.506 [2024-12-05 14:19:08.654852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.506 [2024-12-05 14:19:08.654865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.506 [2024-12-05 14:19:08.654872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.506 [2024-12-05 14:19:08.654878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.506 [2024-12-05 14:19:08.654892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.506 qpair failed and we were unable to recover it. 00:29:02.506 [2024-12-05 14:19:08.664834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.506 [2024-12-05 14:19:08.664894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.506 [2024-12-05 14:19:08.664907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.506 [2024-12-05 14:19:08.664913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.506 [2024-12-05 14:19:08.664920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.506 [2024-12-05 14:19:08.664934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.506 qpair failed and we were unable to recover it. 
00:29:02.506 [2024-12-05 14:19:08.674857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.506 [2024-12-05 14:19:08.674909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.506 [2024-12-05 14:19:08.674922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.506 [2024-12-05 14:19:08.674929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.506 [2024-12-05 14:19:08.674935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.506 [2024-12-05 14:19:08.674949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.506 qpair failed and we were unable to recover it. 00:29:02.506 [2024-12-05 14:19:08.684845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.506 [2024-12-05 14:19:08.684894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.506 [2024-12-05 14:19:08.684908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.506 [2024-12-05 14:19:08.684915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.506 [2024-12-05 14:19:08.684921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.506 [2024-12-05 14:19:08.684935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.506 qpair failed and we were unable to recover it. 00:29:02.506 [2024-12-05 14:19:08.694925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.506 [2024-12-05 14:19:08.694984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.506 [2024-12-05 14:19:08.695001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.506 [2024-12-05 14:19:08.695008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.506 [2024-12-05 14:19:08.695014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.506 [2024-12-05 14:19:08.695028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.506 qpair failed and we were unable to recover it. 
00:29:02.506 [2024-12-05 14:19:08.704960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.506 [2024-12-05 14:19:08.705016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.506 [2024-12-05 14:19:08.705029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.506 [2024-12-05 14:19:08.705035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.506 [2024-12-05 14:19:08.705042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.506 [2024-12-05 14:19:08.705056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.506 qpair failed and we were unable to recover it. 00:29:02.506 [2024-12-05 14:19:08.715006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.506 [2024-12-05 14:19:08.715063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.506 [2024-12-05 14:19:08.715076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.506 [2024-12-05 14:19:08.715083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.506 [2024-12-05 14:19:08.715089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.506 [2024-12-05 14:19:08.715103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.506 qpair failed and we were unable to recover it. 00:29:02.506 [2024-12-05 14:19:08.724958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.506 [2024-12-05 14:19:08.725004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.506 [2024-12-05 14:19:08.725017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.506 [2024-12-05 14:19:08.725024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.506 [2024-12-05 14:19:08.725031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.506 [2024-12-05 14:19:08.725046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.506 qpair failed and we were unable to recover it. 
00:29:02.506 [2024-12-05 14:19:08.735129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.507 [2024-12-05 14:19:08.735191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.507 [2024-12-05 14:19:08.735204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.507 [2024-12-05 14:19:08.735212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.507 [2024-12-05 14:19:08.735225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.507 [2024-12-05 14:19:08.735239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.507 qpair failed and we were unable to recover it. 00:29:02.507 [2024-12-05 14:19:08.745106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.507 [2024-12-05 14:19:08.745168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.507 [2024-12-05 14:19:08.745181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.507 [2024-12-05 14:19:08.745189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.507 [2024-12-05 14:19:08.745195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.507 [2024-12-05 14:19:08.745209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.507 qpair failed and we were unable to recover it. 00:29:02.507 [2024-12-05 14:19:08.755121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.507 [2024-12-05 14:19:08.755179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.507 [2024-12-05 14:19:08.755193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.507 [2024-12-05 14:19:08.755200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.507 [2024-12-05 14:19:08.755206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.507 [2024-12-05 14:19:08.755220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.507 qpair failed and we were unable to recover it. 
00:29:02.507 [2024-12-05 14:19:08.765073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.507 [2024-12-05 14:19:08.765122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.507 [2024-12-05 14:19:08.765136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.507 [2024-12-05 14:19:08.765143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.507 [2024-12-05 14:19:08.765150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.507 [2024-12-05 14:19:08.765164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.507 qpair failed and we were unable to recover it. 00:29:02.507 [2024-12-05 14:19:08.775168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.507 [2024-12-05 14:19:08.775228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.507 [2024-12-05 14:19:08.775241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.507 [2024-12-05 14:19:08.775248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.507 [2024-12-05 14:19:08.775255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.507 [2024-12-05 14:19:08.775268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.507 qpair failed and we were unable to recover it. 00:29:02.507 [2024-12-05 14:19:08.785153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.507 [2024-12-05 14:19:08.785217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.507 [2024-12-05 14:19:08.785231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.507 [2024-12-05 14:19:08.785238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.507 [2024-12-05 14:19:08.785244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.507 [2024-12-05 14:19:08.785258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.507 qpair failed and we were unable to recover it. 
00:29:02.507 [2024-12-05 14:19:08.795189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.507 [2024-12-05 14:19:08.795283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.507 [2024-12-05 14:19:08.795296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.507 [2024-12-05 14:19:08.795303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.507 [2024-12-05 14:19:08.795309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.507 [2024-12-05 14:19:08.795324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.507 qpair failed and we were unable to recover it. 00:29:02.770 [2024-12-05 14:19:08.805149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.770 [2024-12-05 14:19:08.805228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.770 [2024-12-05 14:19:08.805241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.770 [2024-12-05 14:19:08.805248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.770 [2024-12-05 14:19:08.805254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.770 [2024-12-05 14:19:08.805268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.770 qpair failed and we were unable to recover it. 00:29:02.770 [2024-12-05 14:19:08.815222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.770 [2024-12-05 14:19:08.815278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.770 [2024-12-05 14:19:08.815291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.770 [2024-12-05 14:19:08.815298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.770 [2024-12-05 14:19:08.815304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.770 [2024-12-05 14:19:08.815318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.770 qpair failed and we were unable to recover it. 
00:29:02.770 [2024-12-05 14:19:08.825282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.770 [2024-12-05 14:19:08.825334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.770 [2024-12-05 14:19:08.825350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.770 [2024-12-05 14:19:08.825357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.770 [2024-12-05 14:19:08.825363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.770 [2024-12-05 14:19:08.825378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.770 qpair failed and we were unable to recover it. 00:29:02.770 [2024-12-05 14:19:08.835315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.770 [2024-12-05 14:19:08.835406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.770 [2024-12-05 14:19:08.835419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.770 [2024-12-05 14:19:08.835426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.770 [2024-12-05 14:19:08.835432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.770 [2024-12-05 14:19:08.835446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.770 qpair failed and we were unable to recover it. 00:29:02.770 [2024-12-05 14:19:08.845293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.770 [2024-12-05 14:19:08.845343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.770 [2024-12-05 14:19:08.845356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.770 [2024-12-05 14:19:08.845363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.770 [2024-12-05 14:19:08.845370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.770 [2024-12-05 14:19:08.845383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.770 qpair failed and we were unable to recover it. 
00:29:02.770 [2024-12-05 14:19:08.855348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.770 [2024-12-05 14:19:08.855400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.770 [2024-12-05 14:19:08.855413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.770 [2024-12-05 14:19:08.855420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.770 [2024-12-05 14:19:08.855427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.770 [2024-12-05 14:19:08.855441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.770 qpair failed and we were unable to recover it. 00:29:02.770 [2024-12-05 14:19:08.865396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.770 [2024-12-05 14:19:08.865453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.770 [2024-12-05 14:19:08.865470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.770 [2024-12-05 14:19:08.865480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.770 [2024-12-05 14:19:08.865486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.770 [2024-12-05 14:19:08.865501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.770 qpair failed and we were unable to recover it. 00:29:02.770 [2024-12-05 14:19:08.875418] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.770 [2024-12-05 14:19:08.875474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.770 [2024-12-05 14:19:08.875487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.770 [2024-12-05 14:19:08.875494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.770 [2024-12-05 14:19:08.875500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.770 [2024-12-05 14:19:08.875514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.770 qpair failed and we were unable to recover it. 
00:29:02.770 [2024-12-05 14:19:08.885401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.770 [2024-12-05 14:19:08.885447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.770 [2024-12-05 14:19:08.885464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.770 [2024-12-05 14:19:08.885471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.770 [2024-12-05 14:19:08.885477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.770 [2024-12-05 14:19:08.885491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.770 qpair failed and we were unable to recover it. 00:29:02.770 [2024-12-05 14:19:08.895475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.770 [2024-12-05 14:19:08.895529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.770 [2024-12-05 14:19:08.895543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.770 [2024-12-05 14:19:08.895550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.770 [2024-12-05 14:19:08.895556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.770 [2024-12-05 14:19:08.895570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.770 qpair failed and we were unable to recover it. 00:29:02.770 [2024-12-05 14:19:08.905473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.771 [2024-12-05 14:19:08.905529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.771 [2024-12-05 14:19:08.905543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.771 [2024-12-05 14:19:08.905550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.771 [2024-12-05 14:19:08.905556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.771 [2024-12-05 14:19:08.905572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.771 qpair failed and we were unable to recover it. 
00:29:02.771 [2024-12-05 14:19:08.915568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.771 [2024-12-05 14:19:08.915637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.771 [2024-12-05 14:19:08.915650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.771 [2024-12-05 14:19:08.915657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.771 [2024-12-05 14:19:08.915664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.771 [2024-12-05 14:19:08.915679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.771 qpair failed and we were unable to recover it. 00:29:02.771 [2024-12-05 14:19:08.925508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.771 [2024-12-05 14:19:08.925554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.771 [2024-12-05 14:19:08.925567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.771 [2024-12-05 14:19:08.925574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.771 [2024-12-05 14:19:08.925580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.771 [2024-12-05 14:19:08.925594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.771 qpair failed and we were unable to recover it. 00:29:02.771 [2024-12-05 14:19:08.935526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.771 [2024-12-05 14:19:08.935585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.771 [2024-12-05 14:19:08.935598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.771 [2024-12-05 14:19:08.935605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.771 [2024-12-05 14:19:08.935612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.771 [2024-12-05 14:19:08.935626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.771 qpair failed and we were unable to recover it. 
00:29:02.771 [2024-12-05 14:19:08.945612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.771 [2024-12-05 14:19:08.945668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.771 [2024-12-05 14:19:08.945684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.771 [2024-12-05 14:19:08.945691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.771 [2024-12-05 14:19:08.945698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.771 [2024-12-05 14:19:08.945717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.771 qpair failed and we were unable to recover it. 00:29:02.771 [2024-12-05 14:19:08.955669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.771 [2024-12-05 14:19:08.955731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.771 [2024-12-05 14:19:08.955745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.771 [2024-12-05 14:19:08.955752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.771 [2024-12-05 14:19:08.955758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.771 [2024-12-05 14:19:08.955772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.771 qpair failed and we were unable to recover it. 00:29:02.771 [2024-12-05 14:19:08.965644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.771 [2024-12-05 14:19:08.965688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.771 [2024-12-05 14:19:08.965701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.771 [2024-12-05 14:19:08.965708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.771 [2024-12-05 14:19:08.965715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.771 [2024-12-05 14:19:08.965730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.771 qpair failed and we were unable to recover it. 
00:29:02.771 [2024-12-05 14:19:08.975697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.771 [2024-12-05 14:19:08.975755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.771 [2024-12-05 14:19:08.975768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.771 [2024-12-05 14:19:08.975775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.771 [2024-12-05 14:19:08.975781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.771 [2024-12-05 14:19:08.975795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.771 qpair failed and we were unable to recover it. 00:29:02.771 [2024-12-05 14:19:08.985744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.771 [2024-12-05 14:19:08.985802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.771 [2024-12-05 14:19:08.985815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.771 [2024-12-05 14:19:08.985822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.771 [2024-12-05 14:19:08.985828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.771 [2024-12-05 14:19:08.985842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.771 qpair failed and we were unable to recover it. 00:29:02.771 [2024-12-05 14:19:08.995763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.771 [2024-12-05 14:19:08.995815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.771 [2024-12-05 14:19:08.995828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.771 [2024-12-05 14:19:08.995839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.771 [2024-12-05 14:19:08.995845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.771 [2024-12-05 14:19:08.995859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.771 qpair failed and we were unable to recover it. 
00:29:02.771 [2024-12-05 14:19:09.005716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.771 [2024-12-05 14:19:09.005772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.771 [2024-12-05 14:19:09.005785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.771 [2024-12-05 14:19:09.005792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.771 [2024-12-05 14:19:09.005798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.771 [2024-12-05 14:19:09.005813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.771 qpair failed and we were unable to recover it. 00:29:02.771 [2024-12-05 14:19:09.015841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.771 [2024-12-05 14:19:09.015946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.771 [2024-12-05 14:19:09.015959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.771 [2024-12-05 14:19:09.015967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.771 [2024-12-05 14:19:09.015973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.771 [2024-12-05 14:19:09.015987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.771 qpair failed and we were unable to recover it. 00:29:02.771 [2024-12-05 14:19:09.025820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.771 [2024-12-05 14:19:09.025887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.771 [2024-12-05 14:19:09.025901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.771 [2024-12-05 14:19:09.025908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.771 [2024-12-05 14:19:09.025914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.771 [2024-12-05 14:19:09.025928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.771 qpair failed and we were unable to recover it. 
00:29:02.771 [2024-12-05 14:19:09.035841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.772 [2024-12-05 14:19:09.035888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.772 [2024-12-05 14:19:09.035901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.772 [2024-12-05 14:19:09.035908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.772 [2024-12-05 14:19:09.035914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.772 [2024-12-05 14:19:09.035932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.772 qpair failed and we were unable to recover it. 00:29:02.772 [2024-12-05 14:19:09.045832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.772 [2024-12-05 14:19:09.045888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.772 [2024-12-05 14:19:09.045901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.772 [2024-12-05 14:19:09.045908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.772 [2024-12-05 14:19:09.045914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.772 [2024-12-05 14:19:09.045928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.772 qpair failed and we were unable to recover it. 00:29:02.772 [2024-12-05 14:19:09.055909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.772 [2024-12-05 14:19:09.055965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.772 [2024-12-05 14:19:09.055978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.772 [2024-12-05 14:19:09.055984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.772 [2024-12-05 14:19:09.055991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:02.772 [2024-12-05 14:19:09.056004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.772 qpair failed and we were unable to recover it. 
00:29:02.772 [2024-12-05 14:19:09.065936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.036 [2024-12-05 14:19:09.065990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.036 [2024-12-05 14:19:09.066004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.036 [2024-12-05 14:19:09.066011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.036 [2024-12-05 14:19:09.066017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.036 [2024-12-05 14:19:09.066031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-12-05 14:19:09.075991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.036 [2024-12-05 14:19:09.076039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.036 [2024-12-05 14:19:09.076052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.036 [2024-12-05 14:19:09.076059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.036 [2024-12-05 14:19:09.076066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.036 [2024-12-05 14:19:09.076080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.036 qpair failed and we were unable to recover it. 00:29:03.036 [2024-12-05 14:19:09.085959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.036 [2024-12-05 14:19:09.086012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.036 [2024-12-05 14:19:09.086026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.036 [2024-12-05 14:19:09.086033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.036 [2024-12-05 14:19:09.086039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.036 [2024-12-05 14:19:09.086054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.036 qpair failed and we were unable to recover it. 
00:29:03.036 [2024-12-05 14:19:09.096035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.036 [2024-12-05 14:19:09.096091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.036 [2024-12-05 14:19:09.096104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.036 [2024-12-05 14:19:09.096110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.036 [2024-12-05 14:19:09.096117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.036 [2024-12-05 14:19:09.096132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-12-05 14:19:09.105945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.037 [2024-12-05 14:19:09.106011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.037 [2024-12-05 14:19:09.106024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.037 [2024-12-05 14:19:09.106031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.037 [2024-12-05 14:19:09.106038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.037 [2024-12-05 14:19:09.106052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-12-05 14:19:09.116106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.037 [2024-12-05 14:19:09.116200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.037 [2024-12-05 14:19:09.116213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.037 [2024-12-05 14:19:09.116220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.037 [2024-12-05 14:19:09.116226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.037 [2024-12-05 14:19:09.116240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.037 qpair failed and we were unable to recover it. 
00:29:03.037 [2024-12-05 14:19:09.126076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.037 [2024-12-05 14:19:09.126120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.037 [2024-12-05 14:19:09.126136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.037 [2024-12-05 14:19:09.126144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.037 [2024-12-05 14:19:09.126150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.037 [2024-12-05 14:19:09.126164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-12-05 14:19:09.136142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.037 [2024-12-05 14:19:09.136199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.037 [2024-12-05 14:19:09.136213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.037 [2024-12-05 14:19:09.136220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.037 [2024-12-05 14:19:09.136226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.037 [2024-12-05 14:19:09.136240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-12-05 14:19:09.146148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.037 [2024-12-05 14:19:09.146198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.037 [2024-12-05 14:19:09.146211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.037 [2024-12-05 14:19:09.146218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.037 [2024-12-05 14:19:09.146224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.037 [2024-12-05 14:19:09.146239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.037 qpair failed and we were unable to recover it. 
00:29:03.037 [2024-12-05 14:19:09.156214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.037 [2024-12-05 14:19:09.156271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.037 [2024-12-05 14:19:09.156284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.037 [2024-12-05 14:19:09.156291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.037 [2024-12-05 14:19:09.156297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.037 [2024-12-05 14:19:09.156311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-12-05 14:19:09.166183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.037 [2024-12-05 14:19:09.166236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.037 [2024-12-05 14:19:09.166249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.037 [2024-12-05 14:19:09.166256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.037 [2024-12-05 14:19:09.166262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.037 [2024-12-05 14:19:09.166280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-12-05 14:19:09.176250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.037 [2024-12-05 14:19:09.176305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.037 [2024-12-05 14:19:09.176319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.037 [2024-12-05 14:19:09.176326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.037 [2024-12-05 14:19:09.176332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.037 [2024-12-05 14:19:09.176346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.037 qpair failed and we were unable to recover it. 
00:29:03.037 [2024-12-05 14:19:09.186252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.037 [2024-12-05 14:19:09.186303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.037 [2024-12-05 14:19:09.186316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.037 [2024-12-05 14:19:09.186323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.037 [2024-12-05 14:19:09.186329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.037 [2024-12-05 14:19:09.186344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-12-05 14:19:09.196261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.037 [2024-12-05 14:19:09.196311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.037 [2024-12-05 14:19:09.196325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.037 [2024-12-05 14:19:09.196331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.037 [2024-12-05 14:19:09.196337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.037 [2024-12-05 14:19:09.196352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-12-05 14:19:09.206296] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.037 [2024-12-05 14:19:09.206344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.037 [2024-12-05 14:19:09.206357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.037 [2024-12-05 14:19:09.206363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.037 [2024-12-05 14:19:09.206370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.037 [2024-12-05 14:19:09.206384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.037 qpair failed and we were unable to recover it. 
00:29:03.037 [2024-12-05 14:19:09.216358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.037 [2024-12-05 14:19:09.216417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.037 [2024-12-05 14:19:09.216430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.037 [2024-12-05 14:19:09.216437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.037 [2024-12-05 14:19:09.216443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.037 [2024-12-05 14:19:09.216461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.037 [2024-12-05 14:19:09.226279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.037 [2024-12-05 14:19:09.226335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.037 [2024-12-05 14:19:09.226348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.037 [2024-12-05 14:19:09.226355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.037 [2024-12-05 14:19:09.226363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.037 [2024-12-05 14:19:09.226378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.037 qpair failed and we were unable to recover it. 00:29:03.038 [2024-12-05 14:19:09.236431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.038 [2024-12-05 14:19:09.236537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.038 [2024-12-05 14:19:09.236551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.038 [2024-12-05 14:19:09.236558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.038 [2024-12-05 14:19:09.236564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.038 [2024-12-05 14:19:09.236579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.038 qpair failed and we were unable to recover it. 
00:29:03.038 [2024-12-05 14:19:09.246424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.038 [2024-12-05 14:19:09.246475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.038 [2024-12-05 14:19:09.246488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.038 [2024-12-05 14:19:09.246495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.038 [2024-12-05 14:19:09.246501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.038 [2024-12-05 14:19:09.246516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-12-05 14:19:09.256506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.038 [2024-12-05 14:19:09.256566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.038 [2024-12-05 14:19:09.256582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.038 [2024-12-05 14:19:09.256589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.038 [2024-12-05 14:19:09.256595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.038 [2024-12-05 14:19:09.256609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-12-05 14:19:09.266539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.038 [2024-12-05 14:19:09.266852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.038 [2024-12-05 14:19:09.266891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.038 [2024-12-05 14:19:09.266908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.038 [2024-12-05 14:19:09.266915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.038 [2024-12-05 14:19:09.266948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.038 qpair failed and we were unable to recover it. 
00:29:03.038 [2024-12-05 14:19:09.276500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.038 [2024-12-05 14:19:09.276555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.038 [2024-12-05 14:19:09.276569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.038 [2024-12-05 14:19:09.276575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.038 [2024-12-05 14:19:09.276582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.038 [2024-12-05 14:19:09.276597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-12-05 14:19:09.286517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.038 [2024-12-05 14:19:09.286571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.038 [2024-12-05 14:19:09.286584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.038 [2024-12-05 14:19:09.286591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.038 [2024-12-05 14:19:09.286597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.038 [2024-12-05 14:19:09.286612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-12-05 14:19:09.296610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.038 [2024-12-05 14:19:09.296669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.038 [2024-12-05 14:19:09.296682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.038 [2024-12-05 14:19:09.296689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.038 [2024-12-05 14:19:09.296699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.038 [2024-12-05 14:19:09.296714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.038 qpair failed and we were unable to recover it. 
00:29:03.038 [2024-12-05 14:19:09.306620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.038 [2024-12-05 14:19:09.306687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.038 [2024-12-05 14:19:09.306701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.038 [2024-12-05 14:19:09.306707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.038 [2024-12-05 14:19:09.306714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.038 [2024-12-05 14:19:09.306728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-12-05 14:19:09.316635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.038 [2024-12-05 14:19:09.316691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.038 [2024-12-05 14:19:09.316705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.038 [2024-12-05 14:19:09.316712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.038 [2024-12-05 14:19:09.316718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.038 [2024-12-05 14:19:09.316732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.038 qpair failed and we were unable to recover it. 00:29:03.038 [2024-12-05 14:19:09.326643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.038 [2024-12-05 14:19:09.326694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.038 [2024-12-05 14:19:09.326707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.038 [2024-12-05 14:19:09.326713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.038 [2024-12-05 14:19:09.326720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.038 [2024-12-05 14:19:09.326733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.038 qpair failed and we were unable to recover it. 
00:29:03.302 [2024-12-05 14:19:09.336741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.302 [2024-12-05 14:19:09.336837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.302 [2024-12-05 14:19:09.336850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.302 [2024-12-05 14:19:09.336857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.302 [2024-12-05 14:19:09.336863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.302 [2024-12-05 14:19:09.336877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.302 qpair failed and we were unable to recover it. 00:29:03.302 [2024-12-05 14:19:09.346746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.302 [2024-12-05 14:19:09.346799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.302 [2024-12-05 14:19:09.346813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.302 [2024-12-05 14:19:09.346820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.302 [2024-12-05 14:19:09.346826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.302 [2024-12-05 14:19:09.346840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.302 qpair failed and we were unable to recover it. 00:29:03.302 [2024-12-05 14:19:09.356764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.302 [2024-12-05 14:19:09.356813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.302 [2024-12-05 14:19:09.356826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.302 [2024-12-05 14:19:09.356833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.302 [2024-12-05 14:19:09.356839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.302 [2024-12-05 14:19:09.356854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.302 qpair failed and we were unable to recover it. 
00:29:03.302 [2024-12-05 14:19:09.366719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.302 [2024-12-05 14:19:09.366777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.302 [2024-12-05 14:19:09.366790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.302 [2024-12-05 14:19:09.366797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.302 [2024-12-05 14:19:09.366803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.302 [2024-12-05 14:19:09.366817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.302 qpair failed and we were unable to recover it. 00:29:03.302 [2024-12-05 14:19:09.376708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.302 [2024-12-05 14:19:09.376766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.302 [2024-12-05 14:19:09.376781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.302 [2024-12-05 14:19:09.376788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.302 [2024-12-05 14:19:09.376795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.302 [2024-12-05 14:19:09.376810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.302 qpair failed and we were unable to recover it. 00:29:03.302 [2024-12-05 14:19:09.386855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.302 [2024-12-05 14:19:09.386912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.302 [2024-12-05 14:19:09.386928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.302 [2024-12-05 14:19:09.386935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.302 [2024-12-05 14:19:09.386942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.302 [2024-12-05 14:19:09.386956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.302 qpair failed and we were unable to recover it. 
00:29:03.302 [2024-12-05 14:19:09.396895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.302 [2024-12-05 14:19:09.396987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.302 [2024-12-05 14:19:09.397000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.302 [2024-12-05 14:19:09.397007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.302 [2024-12-05 14:19:09.397013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.302 [2024-12-05 14:19:09.397027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.302 qpair failed and we were unable to recover it. 00:29:03.302 [2024-12-05 14:19:09.406859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.302 [2024-12-05 14:19:09.406906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.302 [2024-12-05 14:19:09.406919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.302 [2024-12-05 14:19:09.406926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.302 [2024-12-05 14:19:09.406933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.302 [2024-12-05 14:19:09.406947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.302 qpair failed and we were unable to recover it. 00:29:03.302 [2024-12-05 14:19:09.416925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.302 [2024-12-05 14:19:09.416986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.302 [2024-12-05 14:19:09.416999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.302 [2024-12-05 14:19:09.417006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.302 [2024-12-05 14:19:09.417013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.302 [2024-12-05 14:19:09.417027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.302 qpair failed and we were unable to recover it. 
00:29:03.302 [2024-12-05 14:19:09.427016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.302 [2024-12-05 14:19:09.427107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.302 [2024-12-05 14:19:09.427120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.302 [2024-12-05 14:19:09.427130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.302 [2024-12-05 14:19:09.427137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.302 [2024-12-05 14:19:09.427151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.302 qpair failed and we were unable to recover it. 00:29:03.302 [2024-12-05 14:19:09.436981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.302 [2024-12-05 14:19:09.437040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.302 [2024-12-05 14:19:09.437054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.303 [2024-12-05 14:19:09.437061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.303 [2024-12-05 14:19:09.437067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.303 [2024-12-05 14:19:09.437081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.303 qpair failed and we were unable to recover it. 00:29:03.303 [2024-12-05 14:19:09.446968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.303 [2024-12-05 14:19:09.447025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.303 [2024-12-05 14:19:09.447038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.303 [2024-12-05 14:19:09.447045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.303 [2024-12-05 14:19:09.447051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.303 [2024-12-05 14:19:09.447065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.303 qpair failed and we were unable to recover it. 
00:29:03.303 [2024-12-05 14:19:09.457036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.303 [2024-12-05 14:19:09.457089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.303 [2024-12-05 14:19:09.457103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.303 [2024-12-05 14:19:09.457109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.303 [2024-12-05 14:19:09.457116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.303 [2024-12-05 14:19:09.457130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.303 qpair failed and we were unable to recover it. 00:29:03.303 [2024-12-05 14:19:09.467037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.303 [2024-12-05 14:19:09.467091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.303 [2024-12-05 14:19:09.467104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.303 [2024-12-05 14:19:09.467110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.303 [2024-12-05 14:19:09.467117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.303 [2024-12-05 14:19:09.467131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.303 qpair failed and we were unable to recover it. 00:29:03.303 [2024-12-05 14:19:09.477121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.303 [2024-12-05 14:19:09.477174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.303 [2024-12-05 14:19:09.477187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.303 [2024-12-05 14:19:09.477194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.303 [2024-12-05 14:19:09.477200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.303 [2024-12-05 14:19:09.477215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.303 qpair failed and we were unable to recover it. 
00:29:03.303 [2024-12-05 14:19:09.487033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.303 [2024-12-05 14:19:09.487084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.303 [2024-12-05 14:19:09.487097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.303 [2024-12-05 14:19:09.487104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.303 [2024-12-05 14:19:09.487111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.303 [2024-12-05 14:19:09.487125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.303 qpair failed and we were unable to recover it. 00:29:03.303 [2024-12-05 14:19:09.497142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.303 [2024-12-05 14:19:09.497236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.303 [2024-12-05 14:19:09.497250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.303 [2024-12-05 14:19:09.497257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.303 [2024-12-05 14:19:09.497263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.303 [2024-12-05 14:19:09.497277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.303 qpair failed and we were unable to recover it. 00:29:03.303 [2024-12-05 14:19:09.507145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.303 [2024-12-05 14:19:09.507206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.303 [2024-12-05 14:19:09.507220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.303 [2024-12-05 14:19:09.507227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.303 [2024-12-05 14:19:09.507234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.303 [2024-12-05 14:19:09.507253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.303 qpair failed and we were unable to recover it. 
00:29:03.303 [2024-12-05 14:19:09.517192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.303 [2024-12-05 14:19:09.517289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.303 [2024-12-05 14:19:09.517303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.303 [2024-12-05 14:19:09.517310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.303 [2024-12-05 14:19:09.517316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.303 [2024-12-05 14:19:09.517331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.303 qpair failed and we were unable to recover it. 00:29:03.303 [2024-12-05 14:19:09.527176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.303 [2024-12-05 14:19:09.527259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.303 [2024-12-05 14:19:09.527284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.303 [2024-12-05 14:19:09.527293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.303 [2024-12-05 14:19:09.527300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.303 [2024-12-05 14:19:09.527320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.303 qpair failed and we were unable to recover it. 00:29:03.303 [2024-12-05 14:19:09.537246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.303 [2024-12-05 14:19:09.537338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.303 [2024-12-05 14:19:09.537353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.303 [2024-12-05 14:19:09.537360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.303 [2024-12-05 14:19:09.537367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.303 [2024-12-05 14:19:09.537382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.303 qpair failed and we were unable to recover it. 
00:29:03.303 [2024-12-05 14:19:09.547295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.303 [2024-12-05 14:19:09.547350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.303 [2024-12-05 14:19:09.547364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.303 [2024-12-05 14:19:09.547370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.303 [2024-12-05 14:19:09.547377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.303 [2024-12-05 14:19:09.547392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.303 qpair failed and we were unable to recover it. 00:29:03.303 [2024-12-05 14:19:09.557307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.303 [2024-12-05 14:19:09.557358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.303 [2024-12-05 14:19:09.557371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.303 [2024-12-05 14:19:09.557382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.303 [2024-12-05 14:19:09.557389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.303 [2024-12-05 14:19:09.557403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.303 qpair failed and we were unable to recover it. 00:29:03.303 [2024-12-05 14:19:09.567263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.303 [2024-12-05 14:19:09.567310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.303 [2024-12-05 14:19:09.567324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.303 [2024-12-05 14:19:09.567331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.304 [2024-12-05 14:19:09.567337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.304 [2024-12-05 14:19:09.567351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.304 qpair failed and we were unable to recover it. 
00:29:03.304 [2024-12-05 14:19:09.577360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.304 [2024-12-05 14:19:09.577414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.304 [2024-12-05 14:19:09.577428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.304 [2024-12-05 14:19:09.577434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.304 [2024-12-05 14:19:09.577441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.304 [2024-12-05 14:19:09.577459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.304 qpair failed and we were unable to recover it. 00:29:03.304 [2024-12-05 14:19:09.587365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.304 [2024-12-05 14:19:09.587418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.304 [2024-12-05 14:19:09.587432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.304 [2024-12-05 14:19:09.587439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.304 [2024-12-05 14:19:09.587445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.304 [2024-12-05 14:19:09.587463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.304 qpair failed and we were unable to recover it. 00:29:03.568 [2024-12-05 14:19:09.597421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.568 [2024-12-05 14:19:09.597487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.568 [2024-12-05 14:19:09.597501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.568 [2024-12-05 14:19:09.597509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.568 [2024-12-05 14:19:09.597515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.568 [2024-12-05 14:19:09.597534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.568 qpair failed and we were unable to recover it. 
00:29:03.568 [2024-12-05 14:19:09.607420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.568 [2024-12-05 14:19:09.607474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.568 [2024-12-05 14:19:09.607488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.568 [2024-12-05 14:19:09.607495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.568 [2024-12-05 14:19:09.607501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.568 [2024-12-05 14:19:09.607516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.568 qpair failed and we were unable to recover it. 00:29:03.568 [2024-12-05 14:19:09.617494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.568 [2024-12-05 14:19:09.617551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.568 [2024-12-05 14:19:09.617565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.568 [2024-12-05 14:19:09.617572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.568 [2024-12-05 14:19:09.617578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.568 [2024-12-05 14:19:09.617593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.568 qpair failed and we were unable to recover it. 00:29:03.568 [2024-12-05 14:19:09.627502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.568 [2024-12-05 14:19:09.627556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.568 [2024-12-05 14:19:09.627569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.568 [2024-12-05 14:19:09.627576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.568 [2024-12-05 14:19:09.627582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.568 [2024-12-05 14:19:09.627596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.568 qpair failed and we were unable to recover it. 
00:29:03.568 [2024-12-05 14:19:09.637528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.568 [2024-12-05 14:19:09.637582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.568 [2024-12-05 14:19:09.637595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.568 [2024-12-05 14:19:09.637602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.568 [2024-12-05 14:19:09.637608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.568 [2024-12-05 14:19:09.637623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.568 qpair failed and we were unable to recover it. 00:29:03.568 [2024-12-05 14:19:09.647497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.568 [2024-12-05 14:19:09.647545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.568 [2024-12-05 14:19:09.647558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.568 [2024-12-05 14:19:09.647565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.568 [2024-12-05 14:19:09.647571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.568 [2024-12-05 14:19:09.647585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.568 qpair failed and we were unable to recover it. 00:29:03.568 [2024-12-05 14:19:09.657556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.568 [2024-12-05 14:19:09.657624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.568 [2024-12-05 14:19:09.657638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.568 [2024-12-05 14:19:09.657644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.568 [2024-12-05 14:19:09.657651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.568 [2024-12-05 14:19:09.657665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.568 qpair failed and we were unable to recover it. 
00:29:03.568 [2024-12-05 14:19:09.667590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.568 [2024-12-05 14:19:09.667642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.568 [2024-12-05 14:19:09.667655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.568 [2024-12-05 14:19:09.667662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.568 [2024-12-05 14:19:09.667669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.568 [2024-12-05 14:19:09.667683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.568 qpair failed and we were unable to recover it. 00:29:03.568 [2024-12-05 14:19:09.677639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.568 [2024-12-05 14:19:09.677688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.568 [2024-12-05 14:19:09.677701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.568 [2024-12-05 14:19:09.677709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.568 [2024-12-05 14:19:09.677715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.568 [2024-12-05 14:19:09.677729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.568 qpair failed and we were unable to recover it. 00:29:03.568 [2024-12-05 14:19:09.687614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.568 [2024-12-05 14:19:09.687669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.568 [2024-12-05 14:19:09.687686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.568 [2024-12-05 14:19:09.687693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.568 [2024-12-05 14:19:09.687699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.568 [2024-12-05 14:19:09.687713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.568 qpair failed and we were unable to recover it. 
00:29:03.569 [2024-12-05 14:19:09.697694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.569 [2024-12-05 14:19:09.697749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.569 [2024-12-05 14:19:09.697763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.569 [2024-12-05 14:19:09.697769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.569 [2024-12-05 14:19:09.697775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.569 [2024-12-05 14:19:09.697790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.569 qpair failed and we were unable to recover it. 00:29:03.569 [2024-12-05 14:19:09.707729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.569 [2024-12-05 14:19:09.707786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.569 [2024-12-05 14:19:09.707799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.569 [2024-12-05 14:19:09.707806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.569 [2024-12-05 14:19:09.707812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.569 [2024-12-05 14:19:09.707826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.569 qpair failed and we were unable to recover it. 00:29:03.569 [2024-12-05 14:19:09.717750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.569 [2024-12-05 14:19:09.717799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.569 [2024-12-05 14:19:09.717812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.569 [2024-12-05 14:19:09.717818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.569 [2024-12-05 14:19:09.717824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:03.569 [2024-12-05 14:19:09.717838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:03.569 qpair failed and we were unable to recover it. 
00:29:04.098 [2024-12-05 14:19:10.359498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.098 [2024-12-05 14:19:10.359556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.098 [2024-12-05 14:19:10.359569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.098 [2024-12-05 14:19:10.359576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.098 [2024-12-05 14:19:10.359582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.098 [2024-12-05 14:19:10.359597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.098 qpair failed and we were unable to recover it. 00:29:04.098 [2024-12-05 14:19:10.369487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.098 [2024-12-05 14:19:10.369547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.098 [2024-12-05 14:19:10.369560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.099 [2024-12-05 14:19:10.369567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.099 [2024-12-05 14:19:10.369573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.099 [2024-12-05 14:19:10.369588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.099 qpair failed and we were unable to recover it. 00:29:04.099 [2024-12-05 14:19:10.379522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.099 [2024-12-05 14:19:10.379576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.099 [2024-12-05 14:19:10.379592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.099 [2024-12-05 14:19:10.379599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.099 [2024-12-05 14:19:10.379605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.099 [2024-12-05 14:19:10.379620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.099 qpair failed and we were unable to recover it. 
00:29:04.099 [2024-12-05 14:19:10.389597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.099 [2024-12-05 14:19:10.389656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.099 [2024-12-05 14:19:10.389669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.099 [2024-12-05 14:19:10.389676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.099 [2024-12-05 14:19:10.389682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.099 [2024-12-05 14:19:10.389697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.099 qpair failed and we were unable to recover it. 00:29:04.362 [2024-12-05 14:19:10.399636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.362 [2024-12-05 14:19:10.399688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.362 [2024-12-05 14:19:10.399701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.362 [2024-12-05 14:19:10.399708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.362 [2024-12-05 14:19:10.399714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.362 [2024-12-05 14:19:10.399728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.362 qpair failed and we were unable to recover it. 00:29:04.362 [2024-12-05 14:19:10.409592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.362 [2024-12-05 14:19:10.409638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.362 [2024-12-05 14:19:10.409651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.362 [2024-12-05 14:19:10.409657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.362 [2024-12-05 14:19:10.409664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.362 [2024-12-05 14:19:10.409678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.362 qpair failed and we were unable to recover it. 
00:29:04.362 [2024-12-05 14:19:10.419681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.362 [2024-12-05 14:19:10.419738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.362 [2024-12-05 14:19:10.419751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.362 [2024-12-05 14:19:10.419758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.362 [2024-12-05 14:19:10.419767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.362 [2024-12-05 14:19:10.419782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.362 qpair failed and we were unable to recover it. 00:29:04.362 [2024-12-05 14:19:10.429692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.362 [2024-12-05 14:19:10.429776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.362 [2024-12-05 14:19:10.429789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.362 [2024-12-05 14:19:10.429796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.362 [2024-12-05 14:19:10.429802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.362 [2024-12-05 14:19:10.429816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.362 qpair failed and we were unable to recover it. 00:29:04.362 [2024-12-05 14:19:10.439731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.362 [2024-12-05 14:19:10.439788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.362 [2024-12-05 14:19:10.439802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.362 [2024-12-05 14:19:10.439809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.362 [2024-12-05 14:19:10.439815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.362 [2024-12-05 14:19:10.439834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.362 qpair failed and we were unable to recover it. 
00:29:04.362 [2024-12-05 14:19:10.449701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.362 [2024-12-05 14:19:10.449749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.362 [2024-12-05 14:19:10.449762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.362 [2024-12-05 14:19:10.449769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.362 [2024-12-05 14:19:10.449775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.362 [2024-12-05 14:19:10.449790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.363 qpair failed and we were unable to recover it. 00:29:04.363 [2024-12-05 14:19:10.459806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.363 [2024-12-05 14:19:10.459886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.363 [2024-12-05 14:19:10.459900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.363 [2024-12-05 14:19:10.459907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.363 [2024-12-05 14:19:10.459913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.363 [2024-12-05 14:19:10.459928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.363 qpair failed and we were unable to recover it. 00:29:04.363 [2024-12-05 14:19:10.469797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.363 [2024-12-05 14:19:10.469848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.363 [2024-12-05 14:19:10.469861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.363 [2024-12-05 14:19:10.469868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.363 [2024-12-05 14:19:10.469874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.363 [2024-12-05 14:19:10.469889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.363 qpair failed and we were unable to recover it. 
00:29:04.363 [2024-12-05 14:19:10.479818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.363 [2024-12-05 14:19:10.479947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.363 [2024-12-05 14:19:10.479961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.363 [2024-12-05 14:19:10.479967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.363 [2024-12-05 14:19:10.479974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.363 [2024-12-05 14:19:10.479988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.363 qpair failed and we were unable to recover it. 00:29:04.363 [2024-12-05 14:19:10.489818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.363 [2024-12-05 14:19:10.489873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.363 [2024-12-05 14:19:10.489886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.363 [2024-12-05 14:19:10.489893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.363 [2024-12-05 14:19:10.489900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.363 [2024-12-05 14:19:10.489914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.363 qpair failed and we were unable to recover it. 00:29:04.363 [2024-12-05 14:19:10.499883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.363 [2024-12-05 14:19:10.499940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.363 [2024-12-05 14:19:10.499953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.363 [2024-12-05 14:19:10.499960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.363 [2024-12-05 14:19:10.499966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.363 [2024-12-05 14:19:10.499980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.363 qpair failed and we were unable to recover it. 
00:29:04.363 [2024-12-05 14:19:10.509920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.363 [2024-12-05 14:19:10.509974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.363 [2024-12-05 14:19:10.509990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.363 [2024-12-05 14:19:10.509997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.363 [2024-12-05 14:19:10.510003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.363 [2024-12-05 14:19:10.510018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.363 qpair failed and we were unable to recover it. 00:29:04.363 [2024-12-05 14:19:10.519933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.363 [2024-12-05 14:19:10.519988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.363 [2024-12-05 14:19:10.520001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.363 [2024-12-05 14:19:10.520008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.363 [2024-12-05 14:19:10.520014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.363 [2024-12-05 14:19:10.520028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.363 qpair failed and we were unable to recover it. 00:29:04.363 [2024-12-05 14:19:10.529920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.363 [2024-12-05 14:19:10.529971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.363 [2024-12-05 14:19:10.529984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.363 [2024-12-05 14:19:10.529991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.363 [2024-12-05 14:19:10.529997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.363 [2024-12-05 14:19:10.530011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.363 qpair failed and we were unable to recover it. 
00:29:04.363 [2024-12-05 14:19:10.539989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.363 [2024-12-05 14:19:10.540046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.363 [2024-12-05 14:19:10.540059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.363 [2024-12-05 14:19:10.540066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.363 [2024-12-05 14:19:10.540072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.363 [2024-12-05 14:19:10.540086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.363 qpair failed and we were unable to recover it. 00:29:04.363 [2024-12-05 14:19:10.549907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.363 [2024-12-05 14:19:10.549975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.363 [2024-12-05 14:19:10.549988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.363 [2024-12-05 14:19:10.549998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.363 [2024-12-05 14:19:10.550004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.363 [2024-12-05 14:19:10.550018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.363 qpair failed and we were unable to recover it. 00:29:04.364 [2024-12-05 14:19:10.560022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.364 [2024-12-05 14:19:10.560076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.364 [2024-12-05 14:19:10.560089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.364 [2024-12-05 14:19:10.560096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.364 [2024-12-05 14:19:10.560103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.364 [2024-12-05 14:19:10.560117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.364 qpair failed and we were unable to recover it. 
00:29:04.364 [2024-12-05 14:19:10.570011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.364 [2024-12-05 14:19:10.570062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.364 [2024-12-05 14:19:10.570075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.364 [2024-12-05 14:19:10.570082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.364 [2024-12-05 14:19:10.570088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.364 [2024-12-05 14:19:10.570102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.364 qpair failed and we were unable to recover it. 00:29:04.364 [2024-12-05 14:19:10.580109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.364 [2024-12-05 14:19:10.580162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.364 [2024-12-05 14:19:10.580175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.364 [2024-12-05 14:19:10.580182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.364 [2024-12-05 14:19:10.580189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.364 [2024-12-05 14:19:10.580203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.364 qpair failed and we were unable to recover it. 00:29:04.364 [2024-12-05 14:19:10.590089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.364 [2024-12-05 14:19:10.590147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.364 [2024-12-05 14:19:10.590160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.364 [2024-12-05 14:19:10.590167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.364 [2024-12-05 14:19:10.590174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.364 [2024-12-05 14:19:10.590188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.364 qpair failed and we were unable to recover it. 
00:29:04.364 [2024-12-05 14:19:10.600150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.364 [2024-12-05 14:19:10.600199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.364 [2024-12-05 14:19:10.600212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.364 [2024-12-05 14:19:10.600219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.364 [2024-12-05 14:19:10.600225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.364 [2024-12-05 14:19:10.600240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.364 qpair failed and we were unable to recover it. 00:29:04.364 [2024-12-05 14:19:10.610146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.364 [2024-12-05 14:19:10.610194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.364 [2024-12-05 14:19:10.610207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.364 [2024-12-05 14:19:10.610215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.364 [2024-12-05 14:19:10.610222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.364 [2024-12-05 14:19:10.610236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.364 qpair failed and we were unable to recover it. 00:29:04.364 [2024-12-05 14:19:10.620222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.364 [2024-12-05 14:19:10.620287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.364 [2024-12-05 14:19:10.620312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.364 [2024-12-05 14:19:10.620320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.364 [2024-12-05 14:19:10.620327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.364 [2024-12-05 14:19:10.620347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.364 qpair failed and we were unable to recover it. 
00:29:04.364 [2024-12-05 14:19:10.630169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.364 [2024-12-05 14:19:10.630224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.364 [2024-12-05 14:19:10.630239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.364 [2024-12-05 14:19:10.630246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.364 [2024-12-05 14:19:10.630252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.364 [2024-12-05 14:19:10.630268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.364 qpair failed and we were unable to recover it. 00:29:04.364 [2024-12-05 14:19:10.640246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.364 [2024-12-05 14:19:10.640306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.364 [2024-12-05 14:19:10.640321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.364 [2024-12-05 14:19:10.640328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.364 [2024-12-05 14:19:10.640334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.364 [2024-12-05 14:19:10.640349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.364 qpair failed and we were unable to recover it. 00:29:04.364 [2024-12-05 14:19:10.650231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.364 [2024-12-05 14:19:10.650279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.364 [2024-12-05 14:19:10.650293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.364 [2024-12-05 14:19:10.650300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.364 [2024-12-05 14:19:10.650306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.364 [2024-12-05 14:19:10.650321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.364 qpair failed and we were unable to recover it. 
00:29:04.628 [2024-12-05 14:19:10.660329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.628 [2024-12-05 14:19:10.660385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.628 [2024-12-05 14:19:10.660399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.628 [2024-12-05 14:19:10.660406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.628 [2024-12-05 14:19:10.660412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.628 [2024-12-05 14:19:10.660427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.628 qpair failed and we were unable to recover it. 00:29:04.628 [2024-12-05 14:19:10.670370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.628 [2024-12-05 14:19:10.670425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.628 [2024-12-05 14:19:10.670439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.628 [2024-12-05 14:19:10.670446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.628 [2024-12-05 14:19:10.670453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.628 [2024-12-05 14:19:10.670473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.628 qpair failed and we were unable to recover it. 00:29:04.628 [2024-12-05 14:19:10.680359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.628 [2024-12-05 14:19:10.680460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.628 [2024-12-05 14:19:10.680474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.628 [2024-12-05 14:19:10.680486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.628 [2024-12-05 14:19:10.680492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.628 [2024-12-05 14:19:10.680507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.628 qpair failed and we were unable to recover it. 
00:29:04.628 [2024-12-05 14:19:10.690231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.628 [2024-12-05 14:19:10.690279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.628 [2024-12-05 14:19:10.690293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.628 [2024-12-05 14:19:10.690300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.628 [2024-12-05 14:19:10.690307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.628 [2024-12-05 14:19:10.690322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.628 qpair failed and we were unable to recover it. 00:29:04.628 [2024-12-05 14:19:10.700304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.628 [2024-12-05 14:19:10.700368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.628 [2024-12-05 14:19:10.700381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.628 [2024-12-05 14:19:10.700388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.628 [2024-12-05 14:19:10.700394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.628 [2024-12-05 14:19:10.700408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.628 qpair failed and we were unable to recover it. 00:29:04.628 [2024-12-05 14:19:10.710495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.628 [2024-12-05 14:19:10.710571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.628 [2024-12-05 14:19:10.710585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.628 [2024-12-05 14:19:10.710592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.628 [2024-12-05 14:19:10.710599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.628 [2024-12-05 14:19:10.710613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.628 qpair failed and we were unable to recover it. 
00:29:04.628 [2024-12-05 14:19:10.720436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.628 [2024-12-05 14:19:10.720486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.628 [2024-12-05 14:19:10.720500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.628 [2024-12-05 14:19:10.720507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.628 [2024-12-05 14:19:10.720513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.628 [2024-12-05 14:19:10.720531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.628 qpair failed and we were unable to recover it. 00:29:04.628 [2024-12-05 14:19:10.730447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.628 [2024-12-05 14:19:10.730501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.628 [2024-12-05 14:19:10.730514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.628 [2024-12-05 14:19:10.730521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.628 [2024-12-05 14:19:10.730528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.628 [2024-12-05 14:19:10.730542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.628 qpair failed and we were unable to recover it. 00:29:04.628 [2024-12-05 14:19:10.740639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.628 [2024-12-05 14:19:10.740703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.628 [2024-12-05 14:19:10.740716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.628 [2024-12-05 14:19:10.740723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.628 [2024-12-05 14:19:10.740729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.628 [2024-12-05 14:19:10.740743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.628 qpair failed and we were unable to recover it. 
00:29:04.628 [2024-12-05 14:19:10.750599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.628 [2024-12-05 14:19:10.750656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.628 [2024-12-05 14:19:10.750669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.628 [2024-12-05 14:19:10.750676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.628 [2024-12-05 14:19:10.750682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.628 [2024-12-05 14:19:10.750696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.628 qpair failed and we were unable to recover it. 00:29:04.628 [2024-12-05 14:19:10.760549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.629 [2024-12-05 14:19:10.760602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.629 [2024-12-05 14:19:10.760615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.629 [2024-12-05 14:19:10.760621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.629 [2024-12-05 14:19:10.760628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.629 [2024-12-05 14:19:10.760642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.629 qpair failed and we were unable to recover it. 00:29:04.629 [2024-12-05 14:19:10.770607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.629 [2024-12-05 14:19:10.770656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.629 [2024-12-05 14:19:10.770669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.629 [2024-12-05 14:19:10.770676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.629 [2024-12-05 14:19:10.770682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.629 [2024-12-05 14:19:10.770696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.629 qpair failed and we were unable to recover it. 
00:29:04.629 [2024-12-05 14:19:10.780651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.629 [2024-12-05 14:19:10.780708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.629 [2024-12-05 14:19:10.780721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.629 [2024-12-05 14:19:10.780728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.629 [2024-12-05 14:19:10.780734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.629 [2024-12-05 14:19:10.780749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.629 qpair failed and we were unable to recover it. 00:29:04.629 [2024-12-05 14:19:10.790710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.629 [2024-12-05 14:19:10.790783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.629 [2024-12-05 14:19:10.790797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.629 [2024-12-05 14:19:10.790804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.629 [2024-12-05 14:19:10.790810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.629 [2024-12-05 14:19:10.790825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.629 qpair failed and we were unable to recover it. 00:29:04.629 [2024-12-05 14:19:10.800672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.629 [2024-12-05 14:19:10.800720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.629 [2024-12-05 14:19:10.800733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.629 [2024-12-05 14:19:10.800740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.629 [2024-12-05 14:19:10.800746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.629 [2024-12-05 14:19:10.800761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.629 qpair failed and we were unable to recover it. 
00:29:04.629 [2024-12-05 14:19:10.810605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.629 [2024-12-05 14:19:10.810648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.629 [2024-12-05 14:19:10.810665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.629 [2024-12-05 14:19:10.810672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.629 [2024-12-05 14:19:10.810678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.629 [2024-12-05 14:19:10.810692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.629 qpair failed and we were unable to recover it. 00:29:04.629 [2024-12-05 14:19:10.820741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.629 [2024-12-05 14:19:10.820843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.629 [2024-12-05 14:19:10.820856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.629 [2024-12-05 14:19:10.820863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.629 [2024-12-05 14:19:10.820869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.629 [2024-12-05 14:19:10.820884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.629 qpair failed and we were unable to recover it. 00:29:04.629 [2024-12-05 14:19:10.830669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.629 [2024-12-05 14:19:10.830723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.629 [2024-12-05 14:19:10.830736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.629 [2024-12-05 14:19:10.830743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.629 [2024-12-05 14:19:10.830749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.629 [2024-12-05 14:19:10.830764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.629 qpair failed and we were unable to recover it. 
00:29:04.629 [2024-12-05 14:19:10.840762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.629 [2024-12-05 14:19:10.840807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.629 [2024-12-05 14:19:10.840821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.629 [2024-12-05 14:19:10.840828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.629 [2024-12-05 14:19:10.840834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.629 [2024-12-05 14:19:10.840848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.629 qpair failed and we were unable to recover it. 00:29:04.629 [2024-12-05 14:19:10.850804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.629 [2024-12-05 14:19:10.850891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.629 [2024-12-05 14:19:10.850904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.629 [2024-12-05 14:19:10.850911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.629 [2024-12-05 14:19:10.850920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.629 [2024-12-05 14:19:10.850935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.629 qpair failed and we were unable to recover it. 00:29:04.629 [2024-12-05 14:19:10.860851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.629 [2024-12-05 14:19:10.860909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.629 [2024-12-05 14:19:10.860922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.629 [2024-12-05 14:19:10.860929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.629 [2024-12-05 14:19:10.860935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:04.629 [2024-12-05 14:19:10.860949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.629 qpair failed and we were unable to recover it. 
00:29:05.424 [2024-12-05 14:19:11.502608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.424 [2024-12-05 14:19:11.502705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.424 [2024-12-05 14:19:11.502722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.424 [2024-12-05 14:19:11.502728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.424 [2024-12-05 14:19:11.502735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.424 [2024-12-05 14:19:11.502749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.424 qpair failed and we were unable to recover it. 00:29:05.424 [2024-12-05 14:19:11.512587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.424 [2024-12-05 14:19:11.512633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.424 [2024-12-05 14:19:11.512646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.424 [2024-12-05 14:19:11.512653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.424 [2024-12-05 14:19:11.512659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.424 [2024-12-05 14:19:11.512673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.424 qpair failed and we were unable to recover it. 00:29:05.424 [2024-12-05 14:19:11.522639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.424 [2024-12-05 14:19:11.522685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.424 [2024-12-05 14:19:11.522698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.424 [2024-12-05 14:19:11.522704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.424 [2024-12-05 14:19:11.522710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.424 [2024-12-05 14:19:11.522725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.424 qpair failed and we were unable to recover it. 
00:29:05.424 [2024-12-05 14:19:11.532660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.424 [2024-12-05 14:19:11.532734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.424 [2024-12-05 14:19:11.532747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.424 [2024-12-05 14:19:11.532754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.424 [2024-12-05 14:19:11.532760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.424 [2024-12-05 14:19:11.532774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.424 qpair failed and we were unable to recover it. 00:29:05.424 [2024-12-05 14:19:11.542732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.424 [2024-12-05 14:19:11.542787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.424 [2024-12-05 14:19:11.542800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.424 [2024-12-05 14:19:11.542807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.424 [2024-12-05 14:19:11.542817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.424 [2024-12-05 14:19:11.542832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.424 qpair failed and we were unable to recover it. 00:29:05.424 [2024-12-05 14:19:11.552714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.424 [2024-12-05 14:19:11.552773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.424 [2024-12-05 14:19:11.552786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.424 [2024-12-05 14:19:11.552793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.424 [2024-12-05 14:19:11.552799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.424 [2024-12-05 14:19:11.552813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.424 qpair failed and we were unable to recover it. 
00:29:05.424 [2024-12-05 14:19:11.562770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.424 [2024-12-05 14:19:11.562885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.424 [2024-12-05 14:19:11.562898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.424 [2024-12-05 14:19:11.562905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.424 [2024-12-05 14:19:11.562912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.424 [2024-12-05 14:19:11.562925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.424 qpair failed and we were unable to recover it. 00:29:05.424 [2024-12-05 14:19:11.572751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.424 [2024-12-05 14:19:11.572801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.424 [2024-12-05 14:19:11.572815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.424 [2024-12-05 14:19:11.572821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.424 [2024-12-05 14:19:11.572828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.425 [2024-12-05 14:19:11.572842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.425 qpair failed and we were unable to recover it. 00:29:05.425 [2024-12-05 14:19:11.582834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.425 [2024-12-05 14:19:11.582912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.425 [2024-12-05 14:19:11.582925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.425 [2024-12-05 14:19:11.582932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.425 [2024-12-05 14:19:11.582938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.425 [2024-12-05 14:19:11.582952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.425 qpair failed and we were unable to recover it. 
00:29:05.425 [2024-12-05 14:19:11.592815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.425 [2024-12-05 14:19:11.592875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.425 [2024-12-05 14:19:11.592889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.425 [2024-12-05 14:19:11.592895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.425 [2024-12-05 14:19:11.592901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.425 [2024-12-05 14:19:11.592916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.425 qpair failed and we were unable to recover it. 00:29:05.425 [2024-12-05 14:19:11.602804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.425 [2024-12-05 14:19:11.602875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.425 [2024-12-05 14:19:11.602889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.425 [2024-12-05 14:19:11.602896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.425 [2024-12-05 14:19:11.602902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.425 [2024-12-05 14:19:11.602916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.425 qpair failed and we were unable to recover it. 00:29:05.425 [2024-12-05 14:19:11.612853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.425 [2024-12-05 14:19:11.612899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.425 [2024-12-05 14:19:11.612912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.425 [2024-12-05 14:19:11.612920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.425 [2024-12-05 14:19:11.612926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.425 [2024-12-05 14:19:11.612940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.425 qpair failed and we were unable to recover it. 
00:29:05.425 [2024-12-05 14:19:11.622931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.425 [2024-12-05 14:19:11.622986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.425 [2024-12-05 14:19:11.623000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.425 [2024-12-05 14:19:11.623007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.425 [2024-12-05 14:19:11.623013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.425 [2024-12-05 14:19:11.623027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.425 qpair failed and we were unable to recover it. 00:29:05.425 [2024-12-05 14:19:11.632935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.425 [2024-12-05 14:19:11.632987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.425 [2024-12-05 14:19:11.633004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.425 [2024-12-05 14:19:11.633010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.425 [2024-12-05 14:19:11.633017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.425 [2024-12-05 14:19:11.633031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.425 qpair failed and we were unable to recover it. 00:29:05.425 [2024-12-05 14:19:11.642951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.425 [2024-12-05 14:19:11.643021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.425 [2024-12-05 14:19:11.643033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.425 [2024-12-05 14:19:11.643040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.425 [2024-12-05 14:19:11.643047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.425 [2024-12-05 14:19:11.643061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.425 qpair failed and we were unable to recover it. 
00:29:05.425 [2024-12-05 14:19:11.652976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.425 [2024-12-05 14:19:11.653041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.425 [2024-12-05 14:19:11.653054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.425 [2024-12-05 14:19:11.653061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.425 [2024-12-05 14:19:11.653067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.425 [2024-12-05 14:19:11.653081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.425 qpair failed and we were unable to recover it. 00:29:05.425 [2024-12-05 14:19:11.663032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.425 [2024-12-05 14:19:11.663085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.425 [2024-12-05 14:19:11.663097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.425 [2024-12-05 14:19:11.663104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.425 [2024-12-05 14:19:11.663110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.425 [2024-12-05 14:19:11.663125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.425 qpair failed and we were unable to recover it. 00:29:05.425 [2024-12-05 14:19:11.673080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.425 [2024-12-05 14:19:11.673161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.425 [2024-12-05 14:19:11.673174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.425 [2024-12-05 14:19:11.673184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.425 [2024-12-05 14:19:11.673190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.425 [2024-12-05 14:19:11.673205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.425 qpair failed and we were unable to recover it. 
00:29:05.425 [2024-12-05 14:19:11.683045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.426 [2024-12-05 14:19:11.683108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.426 [2024-12-05 14:19:11.683133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.426 [2024-12-05 14:19:11.683141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.426 [2024-12-05 14:19:11.683148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.426 [2024-12-05 14:19:11.683168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-12-05 14:19:11.693030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.426 [2024-12-05 14:19:11.693080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.426 [2024-12-05 14:19:11.693095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.426 [2024-12-05 14:19:11.693102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.426 [2024-12-05 14:19:11.693109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.426 [2024-12-05 14:19:11.693124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.426 [2024-12-05 14:19:11.703141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.426 [2024-12-05 14:19:11.703196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.426 [2024-12-05 14:19:11.703210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.426 [2024-12-05 14:19:11.703217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.426 [2024-12-05 14:19:11.703223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.426 [2024-12-05 14:19:11.703238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.426 qpair failed and we were unable to recover it. 
00:29:05.426 [2024-12-05 14:19:11.713132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.426 [2024-12-05 14:19:11.713182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.426 [2024-12-05 14:19:11.713195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.426 [2024-12-05 14:19:11.713203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.426 [2024-12-05 14:19:11.713209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.426 [2024-12-05 14:19:11.713232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.426 qpair failed and we were unable to recover it. 00:29:05.688 [2024-12-05 14:19:11.723149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.688 [2024-12-05 14:19:11.723196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.688 [2024-12-05 14:19:11.723209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.688 [2024-12-05 14:19:11.723217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.688 [2024-12-05 14:19:11.723223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.688 [2024-12-05 14:19:11.723237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-12-05 14:19:11.733190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.688 [2024-12-05 14:19:11.733238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.688 [2024-12-05 14:19:11.733251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.688 [2024-12-05 14:19:11.733258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.688 [2024-12-05 14:19:11.733264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.688 [2024-12-05 14:19:11.733279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.688 qpair failed and we were unable to recover it. 
00:29:05.688 [2024-12-05 14:19:11.743224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.688 [2024-12-05 14:19:11.743285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.688 [2024-12-05 14:19:11.743298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.688 [2024-12-05 14:19:11.743305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.688 [2024-12-05 14:19:11.743311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.688 [2024-12-05 14:19:11.743326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-12-05 14:19:11.753267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.688 [2024-12-05 14:19:11.753313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.688 [2024-12-05 14:19:11.753326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.688 [2024-12-05 14:19:11.753333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.688 [2024-12-05 14:19:11.753339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.688 [2024-12-05 14:19:11.753353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.688 qpair failed and we were unable to recover it. 00:29:05.688 [2024-12-05 14:19:11.763267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.688 [2024-12-05 14:19:11.763319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.688 [2024-12-05 14:19:11.763333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.688 [2024-12-05 14:19:11.763339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.688 [2024-12-05 14:19:11.763346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.688 [2024-12-05 14:19:11.763360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.688 qpair failed and we were unable to recover it. 
00:29:05.688 [2024-12-05 14:19:11.773276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.688 [2024-12-05 14:19:11.773325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.688 [2024-12-05 14:19:11.773338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.689 [2024-12-05 14:19:11.773345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.689 [2024-12-05 14:19:11.773351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.689 [2024-12-05 14:19:11.773365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-12-05 14:19:11.783360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.689 [2024-12-05 14:19:11.783415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.689 [2024-12-05 14:19:11.783429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.689 [2024-12-05 14:19:11.783436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.689 [2024-12-05 14:19:11.783442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.689 [2024-12-05 14:19:11.783460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-12-05 14:19:11.793332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.689 [2024-12-05 14:19:11.793427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.689 [2024-12-05 14:19:11.793440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.689 [2024-12-05 14:19:11.793447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.689 [2024-12-05 14:19:11.793456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.689 [2024-12-05 14:19:11.793471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.689 qpair failed and we were unable to recover it. 
00:29:05.689 [2024-12-05 14:19:11.803365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.689 [2024-12-05 14:19:11.803433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.689 [2024-12-05 14:19:11.803446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.689 [2024-12-05 14:19:11.803460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.689 [2024-12-05 14:19:11.803466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.689 [2024-12-05 14:19:11.803481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-12-05 14:19:11.813397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.689 [2024-12-05 14:19:11.813449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.689 [2024-12-05 14:19:11.813467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.689 [2024-12-05 14:19:11.813474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.689 [2024-12-05 14:19:11.813481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.689 [2024-12-05 14:19:11.813495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-12-05 14:19:11.823475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.689 [2024-12-05 14:19:11.823576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.689 [2024-12-05 14:19:11.823589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.689 [2024-12-05 14:19:11.823596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.689 [2024-12-05 14:19:11.823603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.689 [2024-12-05 14:19:11.823617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.689 qpair failed and we were unable to recover it. 
00:29:05.689 [2024-12-05 14:19:11.833467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.689 [2024-12-05 14:19:11.833532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.689 [2024-12-05 14:19:11.833545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.689 [2024-12-05 14:19:11.833552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.689 [2024-12-05 14:19:11.833558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.689 [2024-12-05 14:19:11.833573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-12-05 14:19:11.843473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.689 [2024-12-05 14:19:11.843534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.689 [2024-12-05 14:19:11.843547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.689 [2024-12-05 14:19:11.843554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.689 [2024-12-05 14:19:11.843560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.689 [2024-12-05 14:19:11.843578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-12-05 14:19:11.853529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.689 [2024-12-05 14:19:11.853574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.689 [2024-12-05 14:19:11.853587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.689 [2024-12-05 14:19:11.853594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.689 [2024-12-05 14:19:11.853600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.689 [2024-12-05 14:19:11.853615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.689 qpair failed and we were unable to recover it. 
00:29:05.689 [2024-12-05 14:19:11.863557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.689 [2024-12-05 14:19:11.863615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.689 [2024-12-05 14:19:11.863628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.689 [2024-12-05 14:19:11.863636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.689 [2024-12-05 14:19:11.863643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.689 [2024-12-05 14:19:11.863658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-12-05 14:19:11.873578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.689 [2024-12-05 14:19:11.873632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.689 [2024-12-05 14:19:11.873646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.689 [2024-12-05 14:19:11.873652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.689 [2024-12-05 14:19:11.873659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.689 [2024-12-05 14:19:11.873673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-12-05 14:19:11.883648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.689 [2024-12-05 14:19:11.883701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.689 [2024-12-05 14:19:11.883714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.689 [2024-12-05 14:19:11.883720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.689 [2024-12-05 14:19:11.883727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.689 [2024-12-05 14:19:11.883741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.689 qpair failed and we were unable to recover it. 
00:29:05.689 [2024-12-05 14:19:11.893585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.689 [2024-12-05 14:19:11.893634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.689 [2024-12-05 14:19:11.893647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.689 [2024-12-05 14:19:11.893654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.689 [2024-12-05 14:19:11.893660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.689 [2024-12-05 14:19:11.893675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.689 qpair failed and we were unable to recover it. 00:29:05.689 [2024-12-05 14:19:11.903696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.689 [2024-12-05 14:19:11.903754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.690 [2024-12-05 14:19:11.903767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.690 [2024-12-05 14:19:11.903774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.690 [2024-12-05 14:19:11.903780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.690 [2024-12-05 14:19:11.903794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.690 qpair failed and we were unable to recover it. 00:29:05.690 [2024-12-05 14:19:11.913688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.690 [2024-12-05 14:19:11.913736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.690 [2024-12-05 14:19:11.913749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.690 [2024-12-05 14:19:11.913755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.690 [2024-12-05 14:19:11.913762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.690 [2024-12-05 14:19:11.913776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.690 qpair failed and we were unable to recover it. 
00:29:05.690 [2024-12-05 14:19:11.923692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.690 [2024-12-05 14:19:11.923749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.690 [2024-12-05 14:19:11.923763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.690 [2024-12-05 14:19:11.923770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.690 [2024-12-05 14:19:11.923776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.690 [2024-12-05 14:19:11.923790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.690 qpair failed and we were unable to recover it. 00:29:05.690 [2024-12-05 14:19:11.933726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.690 [2024-12-05 14:19:11.933773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.690 [2024-12-05 14:19:11.933789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.690 [2024-12-05 14:19:11.933796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.690 [2024-12-05 14:19:11.933803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.690 [2024-12-05 14:19:11.933817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.690 qpair failed and we were unable to recover it. 00:29:05.690 [2024-12-05 14:19:11.943797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.690 [2024-12-05 14:19:11.943848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.690 [2024-12-05 14:19:11.943861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.690 [2024-12-05 14:19:11.943867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.690 [2024-12-05 14:19:11.943874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.690 [2024-12-05 14:19:11.943888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.690 qpair failed and we were unable to recover it. 
00:29:05.690 [2024-12-05 14:19:11.953791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.690 [2024-12-05 14:19:11.953836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.690 [2024-12-05 14:19:11.953849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.690 [2024-12-05 14:19:11.953856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.690 [2024-12-05 14:19:11.953862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.690 [2024-12-05 14:19:11.953877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.690 qpair failed and we were unable to recover it. 00:29:05.690 [2024-12-05 14:19:11.963802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.690 [2024-12-05 14:19:11.963856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.690 [2024-12-05 14:19:11.963869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.690 [2024-12-05 14:19:11.963876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.690 [2024-12-05 14:19:11.963882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.690 [2024-12-05 14:19:11.963896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.690 qpair failed and we were unable to recover it. 00:29:05.690 [2024-12-05 14:19:11.973845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.690 [2024-12-05 14:19:11.973914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.690 [2024-12-05 14:19:11.973926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.690 [2024-12-05 14:19:11.973933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.690 [2024-12-05 14:19:11.973943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.690 [2024-12-05 14:19:11.973957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.690 qpair failed and we were unable to recover it. 
00:29:05.690 [2024-12-05 14:19:11.983901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.690 [2024-12-05 14:19:11.983953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.690 [2024-12-05 14:19:11.983966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.690 [2024-12-05 14:19:11.983973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.690 [2024-12-05 14:19:11.983979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.690 [2024-12-05 14:19:11.983993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.690 qpair failed and we were unable to recover it. 00:29:05.953 [2024-12-05 14:19:11.993891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.953 [2024-12-05 14:19:11.993969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.953 [2024-12-05 14:19:11.993982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.953 [2024-12-05 14:19:11.993989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.953 [2024-12-05 14:19:11.993996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.953 [2024-12-05 14:19:11.994010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.953 qpair failed and we were unable to recover it. 00:29:05.953 [2024-12-05 14:19:12.003865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:05.953 [2024-12-05 14:19:12.003907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:05.953 [2024-12-05 14:19:12.003921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:05.953 [2024-12-05 14:19:12.003928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:05.953 [2024-12-05 14:19:12.003934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:05.953 [2024-12-05 14:19:12.003949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:05.953 qpair failed and we were unable to recover it. 
[... 63 further qpair CONNECT failure blocks elided: identical to the blocks above and below except for their timestamps (wall clock 14:19:12.013945 through 14:19:12.635593), all against tqpair=0x7f2aa0000b90 on qpair id 4 ...]
00:29:06.483 [2024-12-05 14:19:12.645631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.483 [2024-12-05 14:19:12.645687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.483 [2024-12-05 14:19:12.645700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.483 [2024-12-05 14:19:12.645707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.483 [2024-12-05 14:19:12.645713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.483 [2024-12-05 14:19:12.645727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.483 qpair failed and we were unable to recover it. 00:29:06.483 [2024-12-05 14:19:12.655733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.483 [2024-12-05 14:19:12.655800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.483 [2024-12-05 14:19:12.655813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.483 [2024-12-05 14:19:12.655820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.483 [2024-12-05 14:19:12.655826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.483 [2024-12-05 14:19:12.655840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.483 qpair failed and we were unable to recover it. 00:29:06.483 [2024-12-05 14:19:12.665718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.483 [2024-12-05 14:19:12.665801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.483 [2024-12-05 14:19:12.665815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.483 [2024-12-05 14:19:12.665821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.483 [2024-12-05 14:19:12.665831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.483 [2024-12-05 14:19:12.665845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.483 qpair failed and we were unable to recover it. 
00:29:06.483 [2024-12-05 14:19:12.675744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.483 [2024-12-05 14:19:12.675795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.483 [2024-12-05 14:19:12.675808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.483 [2024-12-05 14:19:12.675816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.483 [2024-12-05 14:19:12.675822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.483 [2024-12-05 14:19:12.675836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.483 qpair failed and we were unable to recover it. 00:29:06.483 [2024-12-05 14:19:12.685756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.483 [2024-12-05 14:19:12.685806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.483 [2024-12-05 14:19:12.685819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.483 [2024-12-05 14:19:12.685826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.483 [2024-12-05 14:19:12.685832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.483 [2024-12-05 14:19:12.685846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.483 qpair failed and we were unable to recover it. 00:29:06.483 [2024-12-05 14:19:12.695775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.483 [2024-12-05 14:19:12.695824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.483 [2024-12-05 14:19:12.695837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.483 [2024-12-05 14:19:12.695844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.484 [2024-12-05 14:19:12.695850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.484 [2024-12-05 14:19:12.695864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.484 qpair failed and we were unable to recover it. 
00:29:06.484 [2024-12-05 14:19:12.705819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.484 [2024-12-05 14:19:12.705875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.484 [2024-12-05 14:19:12.705888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.484 [2024-12-05 14:19:12.705895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.484 [2024-12-05 14:19:12.705901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.484 [2024-12-05 14:19:12.705916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.484 qpair failed and we were unable to recover it. 00:29:06.484 [2024-12-05 14:19:12.715844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.484 [2024-12-05 14:19:12.715891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.484 [2024-12-05 14:19:12.715904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.484 [2024-12-05 14:19:12.715911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.484 [2024-12-05 14:19:12.715918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.484 [2024-12-05 14:19:12.715932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.484 qpair failed and we were unable to recover it. 00:29:06.484 [2024-12-05 14:19:12.725905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.484 [2024-12-05 14:19:12.725977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.484 [2024-12-05 14:19:12.725991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.484 [2024-12-05 14:19:12.725998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.484 [2024-12-05 14:19:12.726004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.484 [2024-12-05 14:19:12.726018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.484 qpair failed and we were unable to recover it. 
00:29:06.484 [2024-12-05 14:19:12.735888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.484 [2024-12-05 14:19:12.735943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.484 [2024-12-05 14:19:12.735956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.484 [2024-12-05 14:19:12.735963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.484 [2024-12-05 14:19:12.735969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.484 [2024-12-05 14:19:12.735984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.484 qpair failed and we were unable to recover it. 00:29:06.484 [2024-12-05 14:19:12.745951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.484 [2024-12-05 14:19:12.746004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.484 [2024-12-05 14:19:12.746017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.484 [2024-12-05 14:19:12.746024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.484 [2024-12-05 14:19:12.746030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.484 [2024-12-05 14:19:12.746044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.484 qpair failed and we were unable to recover it. 00:29:06.484 [2024-12-05 14:19:12.755911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.484 [2024-12-05 14:19:12.756013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.484 [2024-12-05 14:19:12.756026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.484 [2024-12-05 14:19:12.756033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.484 [2024-12-05 14:19:12.756039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.484 [2024-12-05 14:19:12.756053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.484 qpair failed and we were unable to recover it. 
00:29:06.484 [2024-12-05 14:19:12.765949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.484 [2024-12-05 14:19:12.765998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.484 [2024-12-05 14:19:12.766012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.484 [2024-12-05 14:19:12.766019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.484 [2024-12-05 14:19:12.766025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.484 [2024-12-05 14:19:12.766039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.484 qpair failed and we were unable to recover it. 00:29:06.484 [2024-12-05 14:19:12.775985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.484 [2024-12-05 14:19:12.776039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.484 [2024-12-05 14:19:12.776052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.484 [2024-12-05 14:19:12.776059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.484 [2024-12-05 14:19:12.776065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.484 [2024-12-05 14:19:12.776080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.484 qpair failed and we were unable to recover it. 00:29:06.746 [2024-12-05 14:19:12.785972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.746 [2024-12-05 14:19:12.786071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.746 [2024-12-05 14:19:12.786084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.746 [2024-12-05 14:19:12.786091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.746 [2024-12-05 14:19:12.786097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.746 [2024-12-05 14:19:12.786112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.746 qpair failed and we were unable to recover it. 
00:29:06.746 [2024-12-05 14:19:12.796027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.746 [2024-12-05 14:19:12.796072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.746 [2024-12-05 14:19:12.796085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.746 [2024-12-05 14:19:12.796098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.746 [2024-12-05 14:19:12.796105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.747 [2024-12-05 14:19:12.796119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.747 qpair failed and we were unable to recover it. 00:29:06.747 [2024-12-05 14:19:12.806080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.747 [2024-12-05 14:19:12.806123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.747 [2024-12-05 14:19:12.806136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.747 [2024-12-05 14:19:12.806143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.747 [2024-12-05 14:19:12.806149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.747 [2024-12-05 14:19:12.806163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.747 qpair failed and we were unable to recover it. 00:29:06.747 [2024-12-05 14:19:12.816117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.747 [2024-12-05 14:19:12.816170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.747 [2024-12-05 14:19:12.816195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.747 [2024-12-05 14:19:12.816204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.747 [2024-12-05 14:19:12.816211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.747 [2024-12-05 14:19:12.816230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.747 qpair failed and we were unable to recover it. 
00:29:06.747 [2024-12-05 14:19:12.826049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.747 [2024-12-05 14:19:12.826107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.747 [2024-12-05 14:19:12.826132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.747 [2024-12-05 14:19:12.826141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.747 [2024-12-05 14:19:12.826148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.747 [2024-12-05 14:19:12.826167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.747 qpair failed and we were unable to recover it. 00:29:06.747 [2024-12-05 14:19:12.836157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.747 [2024-12-05 14:19:12.836209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.747 [2024-12-05 14:19:12.836224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.747 [2024-12-05 14:19:12.836231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.747 [2024-12-05 14:19:12.836238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.747 [2024-12-05 14:19:12.836258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.747 qpair failed and we were unable to recover it. 00:29:06.747 [2024-12-05 14:19:12.846183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.747 [2024-12-05 14:19:12.846239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.747 [2024-12-05 14:19:12.846264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.747 [2024-12-05 14:19:12.846273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.747 [2024-12-05 14:19:12.846280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.747 [2024-12-05 14:19:12.846300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.747 qpair failed and we were unable to recover it. 
00:29:06.747 [2024-12-05 14:19:12.856214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.747 [2024-12-05 14:19:12.856318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.747 [2024-12-05 14:19:12.856343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.747 [2024-12-05 14:19:12.856351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.747 [2024-12-05 14:19:12.856359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.747 [2024-12-05 14:19:12.856378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.747 qpair failed and we were unable to recover it. 00:29:06.747 [2024-12-05 14:19:12.866278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.747 [2024-12-05 14:19:12.866336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.747 [2024-12-05 14:19:12.866351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.747 [2024-12-05 14:19:12.866358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.747 [2024-12-05 14:19:12.866364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.747 [2024-12-05 14:19:12.866380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.747 qpair failed and we were unable to recover it. 00:29:06.747 [2024-12-05 14:19:12.876269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.747 [2024-12-05 14:19:12.876322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.747 [2024-12-05 14:19:12.876336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.747 [2024-12-05 14:19:12.876343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.747 [2024-12-05 14:19:12.876349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.747 [2024-12-05 14:19:12.876363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.747 qpair failed and we were unable to recover it. 
00:29:06.747 [2024-12-05 14:19:12.886284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.747 [2024-12-05 14:19:12.886333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.747 [2024-12-05 14:19:12.886349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.747 [2024-12-05 14:19:12.886356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.747 [2024-12-05 14:19:12.886366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.747 [2024-12-05 14:19:12.886381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.747 qpair failed and we were unable to recover it. 00:29:06.747 [2024-12-05 14:19:12.896318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.747 [2024-12-05 14:19:12.896367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.747 [2024-12-05 14:19:12.896380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.748 [2024-12-05 14:19:12.896387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.748 [2024-12-05 14:19:12.896394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.748 [2024-12-05 14:19:12.896408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.748 qpair failed and we were unable to recover it. 00:29:06.748 [2024-12-05 14:19:12.906345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.748 [2024-12-05 14:19:12.906397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.748 [2024-12-05 14:19:12.906410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.748 [2024-12-05 14:19:12.906417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.748 [2024-12-05 14:19:12.906424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.748 [2024-12-05 14:19:12.906438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.748 qpair failed and we were unable to recover it. 
00:29:06.748 [2024-12-05 14:19:12.916352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.748 [2024-12-05 14:19:12.916397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.748 [2024-12-05 14:19:12.916410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.748 [2024-12-05 14:19:12.916417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.748 [2024-12-05 14:19:12.916423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.748 [2024-12-05 14:19:12.916438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.748 qpair failed and we were unable to recover it. 00:29:06.748 [2024-12-05 14:19:12.926404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.748 [2024-12-05 14:19:12.926465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.748 [2024-12-05 14:19:12.926479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.748 [2024-12-05 14:19:12.926490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.748 [2024-12-05 14:19:12.926497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.748 [2024-12-05 14:19:12.926512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.748 qpair failed and we were unable to recover it. 00:29:06.748 [2024-12-05 14:19:12.936416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.748 [2024-12-05 14:19:12.936467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.748 [2024-12-05 14:19:12.936481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.748 [2024-12-05 14:19:12.936487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.748 [2024-12-05 14:19:12.936494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.748 [2024-12-05 14:19:12.936508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.748 qpair failed and we were unable to recover it. 
00:29:06.748 [2024-12-05 14:19:12.946491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.748 [2024-12-05 14:19:12.946549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.748 [2024-12-05 14:19:12.946563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.748 [2024-12-05 14:19:12.946569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.748 [2024-12-05 14:19:12.946576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.748 [2024-12-05 14:19:12.946590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.748 qpair failed and we were unable to recover it. 00:29:06.748 [2024-12-05 14:19:12.956499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.748 [2024-12-05 14:19:12.956552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.748 [2024-12-05 14:19:12.956565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.748 [2024-12-05 14:19:12.956572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.748 [2024-12-05 14:19:12.956578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.748 [2024-12-05 14:19:12.956593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.748 qpair failed and we were unable to recover it. 00:29:06.748 [2024-12-05 14:19:12.966490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.748 [2024-12-05 14:19:12.966566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.748 [2024-12-05 14:19:12.966579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.748 [2024-12-05 14:19:12.966586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.748 [2024-12-05 14:19:12.966593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.748 [2024-12-05 14:19:12.966611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.748 qpair failed and we were unable to recover it. 
00:29:06.748 [2024-12-05 14:19:12.976486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.748 [2024-12-05 14:19:12.976553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.748 [2024-12-05 14:19:12.976566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.748 [2024-12-05 14:19:12.976573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.748 [2024-12-05 14:19:12.976579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.748 [2024-12-05 14:19:12.976593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.748 qpair failed and we were unable to recover it. 00:29:06.748 [2024-12-05 14:19:12.986608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.748 [2024-12-05 14:19:12.986661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.748 [2024-12-05 14:19:12.986675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.748 [2024-12-05 14:19:12.986682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.748 [2024-12-05 14:19:12.986688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.748 [2024-12-05 14:19:12.986703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.748 qpair failed and we were unable to recover it. 00:29:06.748 [2024-12-05 14:19:12.996596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.748 [2024-12-05 14:19:12.996663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.749 [2024-12-05 14:19:12.996676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.749 [2024-12-05 14:19:12.996683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.749 [2024-12-05 14:19:12.996689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.749 [2024-12-05 14:19:12.996704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.749 qpair failed and we were unable to recover it. 
00:29:06.749 [2024-12-05 14:19:13.006610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.749 [2024-12-05 14:19:13.006683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.749 [2024-12-05 14:19:13.006696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.749 [2024-12-05 14:19:13.006703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.749 [2024-12-05 14:19:13.006710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.749 [2024-12-05 14:19:13.006724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.749 qpair failed and we were unable to recover it. 00:29:06.749 [2024-12-05 14:19:13.016621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.749 [2024-12-05 14:19:13.016664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.749 [2024-12-05 14:19:13.016677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.749 [2024-12-05 14:19:13.016684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.749 [2024-12-05 14:19:13.016690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.749 [2024-12-05 14:19:13.016704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.749 qpair failed and we were unable to recover it. 00:29:06.749 [2024-12-05 14:19:13.026636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.749 [2024-12-05 14:19:13.026679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.749 [2024-12-05 14:19:13.026692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.749 [2024-12-05 14:19:13.026699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.749 [2024-12-05 14:19:13.026705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.749 [2024-12-05 14:19:13.026719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.749 qpair failed and we were unable to recover it. 
00:29:06.749 [2024-12-05 14:19:13.036702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:06.749 [2024-12-05 14:19:13.036748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:06.749 [2024-12-05 14:19:13.036761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:06.749 [2024-12-05 14:19:13.036768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:06.749 [2024-12-05 14:19:13.036774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:06.749 [2024-12-05 14:19:13.036789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:06.749 qpair failed and we were unable to recover it. 00:29:07.010 [2024-12-05 14:19:13.046687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.010 [2024-12-05 14:19:13.046782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.010 [2024-12-05 14:19:13.046795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.010 [2024-12-05 14:19:13.046802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.010 [2024-12-05 14:19:13.046808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.010 [2024-12-05 14:19:13.046822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.010 qpair failed and we were unable to recover it. 00:29:07.010 [2024-12-05 14:19:13.056727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.010 [2024-12-05 14:19:13.056768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.010 [2024-12-05 14:19:13.056785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.010 [2024-12-05 14:19:13.056792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.010 [2024-12-05 14:19:13.056798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.010 [2024-12-05 14:19:13.056812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.010 qpair failed and we were unable to recover it. 
00:29:07.010 [2024-12-05 14:19:13.066749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.010 [2024-12-05 14:19:13.066806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.010 [2024-12-05 14:19:13.066819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.010 [2024-12-05 14:19:13.066826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.010 [2024-12-05 14:19:13.066832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.010 [2024-12-05 14:19:13.066846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.010 qpair failed and we were unable to recover it. 00:29:07.010 [2024-12-05 14:19:13.076793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.010 [2024-12-05 14:19:13.076843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.010 [2024-12-05 14:19:13.076856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.010 [2024-12-05 14:19:13.076863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.010 [2024-12-05 14:19:13.076869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.010 [2024-12-05 14:19:13.076883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.010 qpair failed and we were unable to recover it. 00:29:07.010 [2024-12-05 14:19:13.086772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.010 [2024-12-05 14:19:13.086817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.010 [2024-12-05 14:19:13.086830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.010 [2024-12-05 14:19:13.086837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.010 [2024-12-05 14:19:13.086844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.010 [2024-12-05 14:19:13.086858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.010 qpair failed and we were unable to recover it. 
00:29:07.011 [2024-12-05 14:19:13.096830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.011 [2024-12-05 14:19:13.096875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.011 [2024-12-05 14:19:13.096890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.011 [2024-12-05 14:19:13.096897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.011 [2024-12-05 14:19:13.096909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.011 [2024-12-05 14:19:13.096924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.011 qpair failed and we were unable to recover it. 00:29:07.011 [2024-12-05 14:19:13.106734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.011 [2024-12-05 14:19:13.106782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.011 [2024-12-05 14:19:13.106795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.011 [2024-12-05 14:19:13.106803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.011 [2024-12-05 14:19:13.106809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.011 [2024-12-05 14:19:13.106823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.011 qpair failed and we were unable to recover it. 00:29:07.011 [2024-12-05 14:19:13.116905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.011 [2024-12-05 14:19:13.116956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.011 [2024-12-05 14:19:13.116970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.011 [2024-12-05 14:19:13.116977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.011 [2024-12-05 14:19:13.116983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.011 [2024-12-05 14:19:13.116998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.011 qpair failed and we were unable to recover it. 
00:29:07.011 [2024-12-05 14:19:13.126942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.011 [2024-12-05 14:19:13.127028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.011 [2024-12-05 14:19:13.127041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.011 [2024-12-05 14:19:13.127048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.011 [2024-12-05 14:19:13.127054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.011 [2024-12-05 14:19:13.127068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.011 qpair failed and we were unable to recover it. 00:29:07.011 [2024-12-05 14:19:13.136946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.011 [2024-12-05 14:19:13.136992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.011 [2024-12-05 14:19:13.137005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.011 [2024-12-05 14:19:13.137012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.011 [2024-12-05 14:19:13.137018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.011 [2024-12-05 14:19:13.137033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.011 qpair failed and we were unable to recover it. 00:29:07.011 [2024-12-05 14:19:13.146939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.011 [2024-12-05 14:19:13.146985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.011 [2024-12-05 14:19:13.146998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.011 [2024-12-05 14:19:13.147005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.011 [2024-12-05 14:19:13.147011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.011 [2024-12-05 14:19:13.147026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.011 qpair failed and we were unable to recover it. 
00:29:07.011 [2024-12-05 14:19:13.157003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.011 [2024-12-05 14:19:13.157051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.011 [2024-12-05 14:19:13.157064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.011 [2024-12-05 14:19:13.157071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.011 [2024-12-05 14:19:13.157077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.011 [2024-12-05 14:19:13.157091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.011 qpair failed and we were unable to recover it. 00:29:07.011 [2024-12-05 14:19:13.167021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.011 [2024-12-05 14:19:13.167068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.011 [2024-12-05 14:19:13.167081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.011 [2024-12-05 14:19:13.167088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.011 [2024-12-05 14:19:13.167094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.011 [2024-12-05 14:19:13.167108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.011 qpair failed and we were unable to recover it. 00:29:07.011 [2024-12-05 14:19:13.177061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.011 [2024-12-05 14:19:13.177101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.011 [2024-12-05 14:19:13.177114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.011 [2024-12-05 14:19:13.177121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.011 [2024-12-05 14:19:13.177127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.011 [2024-12-05 14:19:13.177142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.011 qpair failed and we were unable to recover it. 
00:29:07.011 [2024-12-05 14:19:13.187091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.011 [2024-12-05 14:19:13.187133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.011 [2024-12-05 14:19:13.187153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.011 [2024-12-05 14:19:13.187160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.011 [2024-12-05 14:19:13.187166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.012 [2024-12-05 14:19:13.187180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.012 qpair failed and we were unable to recover it.
00:29:07.012 [2024-12-05 14:19:13.197117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.012 [2024-12-05 14:19:13.197215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.012 [2024-12-05 14:19:13.197231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.012 [2024-12-05 14:19:13.197238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.012 [2024-12-05 14:19:13.197245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.012 [2024-12-05 14:19:13.197261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.012 qpair failed and we were unable to recover it.
00:29:07.012 [2024-12-05 14:19:13.207117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.012 [2024-12-05 14:19:13.207158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.012 [2024-12-05 14:19:13.207171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.012 [2024-12-05 14:19:13.207177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.012 [2024-12-05 14:19:13.207184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.012 [2024-12-05 14:19:13.207198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.012 qpair failed and we were unable to recover it.
00:29:07.012 [2024-12-05 14:19:13.217148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.012 [2024-12-05 14:19:13.217190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.012 [2024-12-05 14:19:13.217203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.012 [2024-12-05 14:19:13.217210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.012 [2024-12-05 14:19:13.217216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.012 [2024-12-05 14:19:13.217231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.012 qpair failed and we were unable to recover it.
00:29:07.012 [2024-12-05 14:19:13.227188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.012 [2024-12-05 14:19:13.227285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.012 [2024-12-05 14:19:13.227298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.012 [2024-12-05 14:19:13.227305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.012 [2024-12-05 14:19:13.227314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.012 [2024-12-05 14:19:13.227329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.012 qpair failed and we were unable to recover it.
00:29:07.012 [2024-12-05 14:19:13.237218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.012 [2024-12-05 14:19:13.237267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.012 [2024-12-05 14:19:13.237280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.012 [2024-12-05 14:19:13.237287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.012 [2024-12-05 14:19:13.237293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.012 [2024-12-05 14:19:13.237307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.012 qpair failed and we were unable to recover it.
00:29:07.012 [2024-12-05 14:19:13.247236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.012 [2024-12-05 14:19:13.247280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.012 [2024-12-05 14:19:13.247293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.012 [2024-12-05 14:19:13.247300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.012 [2024-12-05 14:19:13.247306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.012 [2024-12-05 14:19:13.247320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.012 qpair failed and we were unable to recover it.
00:29:07.012 [2024-12-05 14:19:13.257265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.012 [2024-12-05 14:19:13.257310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.012 [2024-12-05 14:19:13.257323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.012 [2024-12-05 14:19:13.257330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.012 [2024-12-05 14:19:13.257337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.012 [2024-12-05 14:19:13.257351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.012 qpair failed and we were unable to recover it.
00:29:07.012 [2024-12-05 14:19:13.267295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.012 [2024-12-05 14:19:13.267343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.012 [2024-12-05 14:19:13.267355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.012 [2024-12-05 14:19:13.267362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.012 [2024-12-05 14:19:13.267369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.012 [2024-12-05 14:19:13.267383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.012 qpair failed and we were unable to recover it.
00:29:07.012 [2024-12-05 14:19:13.277319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.012 [2024-12-05 14:19:13.277366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.012 [2024-12-05 14:19:13.277379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.012 [2024-12-05 14:19:13.277386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.012 [2024-12-05 14:19:13.277392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.012 [2024-12-05 14:19:13.277406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.012 qpair failed and we were unable to recover it.
00:29:07.012 [2024-12-05 14:19:13.287328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.012 [2024-12-05 14:19:13.287375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.012 [2024-12-05 14:19:13.287388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.012 [2024-12-05 14:19:13.287395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.012 [2024-12-05 14:19:13.287401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.012 [2024-12-05 14:19:13.287416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.012 qpair failed and we were unable to recover it.
00:29:07.013 [2024-12-05 14:19:13.297351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.013 [2024-12-05 14:19:13.297407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.013 [2024-12-05 14:19:13.297420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.013 [2024-12-05 14:19:13.297427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.013 [2024-12-05 14:19:13.297433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.013 [2024-12-05 14:19:13.297447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.013 qpair failed and we were unable to recover it.
00:29:07.276 [2024-12-05 14:19:13.307395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.276 [2024-12-05 14:19:13.307438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.276 [2024-12-05 14:19:13.307452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.276 [2024-12-05 14:19:13.307463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.276 [2024-12-05 14:19:13.307469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.276 [2024-12-05 14:19:13.307483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.276 qpair failed and we were unable to recover it.
00:29:07.276 [2024-12-05 14:19:13.317434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.276 [2024-12-05 14:19:13.317484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.276 [2024-12-05 14:19:13.317497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.276 [2024-12-05 14:19:13.317504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.276 [2024-12-05 14:19:13.317510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.276 [2024-12-05 14:19:13.317525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.276 qpair failed and we were unable to recover it.
00:29:07.276 [2024-12-05 14:19:13.327444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.276 [2024-12-05 14:19:13.327490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.276 [2024-12-05 14:19:13.327503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.276 [2024-12-05 14:19:13.327510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.276 [2024-12-05 14:19:13.327516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.276 [2024-12-05 14:19:13.327530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.276 qpair failed and we were unable to recover it.
00:29:07.276 [2024-12-05 14:19:13.337338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.276 [2024-12-05 14:19:13.337386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.276 [2024-12-05 14:19:13.337400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.276 [2024-12-05 14:19:13.337407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.276 [2024-12-05 14:19:13.337413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.276 [2024-12-05 14:19:13.337427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.276 qpair failed and we were unable to recover it.
00:29:07.276 [2024-12-05 14:19:13.347532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.276 [2024-12-05 14:19:13.347618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.276 [2024-12-05 14:19:13.347631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.276 [2024-12-05 14:19:13.347639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.276 [2024-12-05 14:19:13.347645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.276 [2024-12-05 14:19:13.347660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.276 qpair failed and we were unable to recover it.
00:29:07.276 [2024-12-05 14:19:13.357551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.276 [2024-12-05 14:19:13.357604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.276 [2024-12-05 14:19:13.357616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.276 [2024-12-05 14:19:13.357627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.276 [2024-12-05 14:19:13.357633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.276 [2024-12-05 14:19:13.357647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.276 qpair failed and we were unable to recover it.
00:29:07.276 [2024-12-05 14:19:13.367536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.276 [2024-12-05 14:19:13.367579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.276 [2024-12-05 14:19:13.367592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.276 [2024-12-05 14:19:13.367600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.276 [2024-12-05 14:19:13.367606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.276 [2024-12-05 14:19:13.367620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.276 qpair failed and we were unable to recover it.
00:29:07.276 [2024-12-05 14:19:13.377550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.276 [2024-12-05 14:19:13.377591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.276 [2024-12-05 14:19:13.377604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.276 [2024-12-05 14:19:13.377611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.276 [2024-12-05 14:19:13.377617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.276 [2024-12-05 14:19:13.377631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.276 qpair failed and we were unable to recover it.
00:29:07.276 [2024-12-05 14:19:13.387565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.276 [2024-12-05 14:19:13.387615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.276 [2024-12-05 14:19:13.387629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.277 [2024-12-05 14:19:13.387636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.277 [2024-12-05 14:19:13.387642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.277 [2024-12-05 14:19:13.387657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.277 qpair failed and we were unable to recover it.
00:29:07.277 [2024-12-05 14:19:13.397530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.277 [2024-12-05 14:19:13.397583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.277 [2024-12-05 14:19:13.397596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.277 [2024-12-05 14:19:13.397603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.277 [2024-12-05 14:19:13.397609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.277 [2024-12-05 14:19:13.397627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.277 qpair failed and we were unable to recover it.
00:29:07.277 [2024-12-05 14:19:13.407642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.277 [2024-12-05 14:19:13.407687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.277 [2024-12-05 14:19:13.407700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.277 [2024-12-05 14:19:13.407707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.277 [2024-12-05 14:19:13.407713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.277 [2024-12-05 14:19:13.407727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.277 qpair failed and we were unable to recover it.
00:29:07.277 [2024-12-05 14:19:13.417690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.277 [2024-12-05 14:19:13.417735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.277 [2024-12-05 14:19:13.417748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.277 [2024-12-05 14:19:13.417755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.277 [2024-12-05 14:19:13.417761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.277 [2024-12-05 14:19:13.417775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.277 qpair failed and we were unable to recover it.
00:29:07.277 [2024-12-05 14:19:13.427692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.277 [2024-12-05 14:19:13.427736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.277 [2024-12-05 14:19:13.427749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.277 [2024-12-05 14:19:13.427756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.277 [2024-12-05 14:19:13.427762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.277 [2024-12-05 14:19:13.427776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.277 qpair failed and we were unable to recover it.
00:29:07.277 [2024-12-05 14:19:13.437781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.277 [2024-12-05 14:19:13.437828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.277 [2024-12-05 14:19:13.437841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.277 [2024-12-05 14:19:13.437848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.277 [2024-12-05 14:19:13.437854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.277 [2024-12-05 14:19:13.437868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.277 qpair failed and we were unable to recover it.
00:29:07.277 [2024-12-05 14:19:13.447751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.277 [2024-12-05 14:19:13.447794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.277 [2024-12-05 14:19:13.447807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.277 [2024-12-05 14:19:13.447814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.277 [2024-12-05 14:19:13.447821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.277 [2024-12-05 14:19:13.447835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.277 qpair failed and we were unable to recover it.
00:29:07.277 [2024-12-05 14:19:13.457800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.277 [2024-12-05 14:19:13.457842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.277 [2024-12-05 14:19:13.457855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.277 [2024-12-05 14:19:13.457862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.277 [2024-12-05 14:19:13.457869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.277 [2024-12-05 14:19:13.457882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.277 qpair failed and we were unable to recover it.
00:29:07.277 [2024-12-05 14:19:13.467745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.277 [2024-12-05 14:19:13.467789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.277 [2024-12-05 14:19:13.467802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.277 [2024-12-05 14:19:13.467809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.277 [2024-12-05 14:19:13.467815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.277 [2024-12-05 14:19:13.467829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.277 qpair failed and we were unable to recover it.
00:29:07.277 [2024-12-05 14:19:13.477829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.277 [2024-12-05 14:19:13.477873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.277 [2024-12-05 14:19:13.477886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.277 [2024-12-05 14:19:13.477893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.277 [2024-12-05 14:19:13.477899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.277 [2024-12-05 14:19:13.477913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.277 qpair failed and we were unable to recover it.
00:29:07.277 [2024-12-05 14:19:13.487848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.277 [2024-12-05 14:19:13.487891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.278 [2024-12-05 14:19:13.487904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.278 [2024-12-05 14:19:13.487915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.278 [2024-12-05 14:19:13.487921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.278 [2024-12-05 14:19:13.487936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.278 qpair failed and we were unable to recover it.
00:29:07.278 [2024-12-05 14:19:13.497914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.278 [2024-12-05 14:19:13.497956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.278 [2024-12-05 14:19:13.497969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.278 [2024-12-05 14:19:13.497976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.278 [2024-12-05 14:19:13.497983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.278 [2024-12-05 14:19:13.497997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.278 qpair failed and we were unable to recover it.
00:29:07.278 [2024-12-05 14:19:13.507931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.278 [2024-12-05 14:19:13.507981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.278 [2024-12-05 14:19:13.507994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.278 [2024-12-05 14:19:13.508001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.278 [2024-12-05 14:19:13.508007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.278 [2024-12-05 14:19:13.508021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.278 qpair failed and we were unable to recover it.
00:29:07.278 [2024-12-05 14:19:13.517967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.278 [2024-12-05 14:19:13.518034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.278 [2024-12-05 14:19:13.518047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.278 [2024-12-05 14:19:13.518054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.278 [2024-12-05 14:19:13.518060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.278 [2024-12-05 14:19:13.518074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.278 qpair failed and we were unable to recover it.
00:29:07.278 [2024-12-05 14:19:13.527960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.278 [2024-12-05 14:19:13.528003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.278 [2024-12-05 14:19:13.528017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.278 [2024-12-05 14:19:13.528024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.278 [2024-12-05 14:19:13.528030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.278 [2024-12-05 14:19:13.528047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.278 qpair failed and we were unable to recover it.
00:29:07.278 [2024-12-05 14:19:13.538009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.278 [2024-12-05 14:19:13.538051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.278 [2024-12-05 14:19:13.538065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.278 [2024-12-05 14:19:13.538071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.278 [2024-12-05 14:19:13.538078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.278 [2024-12-05 14:19:13.538092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.278 qpair failed and we were unable to recover it.
00:29:07.278 [2024-12-05 14:19:13.548036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.278 [2024-12-05 14:19:13.548083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.278 [2024-12-05 14:19:13.548096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.278 [2024-12-05 14:19:13.548102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.278 [2024-12-05 14:19:13.548108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.278 [2024-12-05 14:19:13.548122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.278 qpair failed and we were unable to recover it.
00:29:07.278 [2024-12-05 14:19:13.558082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.278 [2024-12-05 14:19:13.558131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.278 [2024-12-05 14:19:13.558144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.278 [2024-12-05 14:19:13.558150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.278 [2024-12-05 14:19:13.558157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.278 [2024-12-05 14:19:13.558171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.278 qpair failed and we were unable to recover it.
00:29:07.278 [2024-12-05 14:19:13.568073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.278 [2024-12-05 14:19:13.568126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.278 [2024-12-05 14:19:13.568151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.278 [2024-12-05 14:19:13.568160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.278 [2024-12-05 14:19:13.568166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.278 [2024-12-05 14:19:13.568186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.278 qpair failed and we were unable to recover it.
00:29:07.541 [2024-12-05 14:19:13.578126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.541 [2024-12-05 14:19:13.578176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.541 [2024-12-05 14:19:13.578201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.541 [2024-12-05 14:19:13.578209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.541 [2024-12-05 14:19:13.578216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.541 [2024-12-05 14:19:13.578235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.541 qpair failed and we were unable to recover it.
00:29:07.541 [2024-12-05 14:19:13.588147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.541 [2024-12-05 14:19:13.588210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.541 [2024-12-05 14:19:13.588225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.541 [2024-12-05 14:19:13.588232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.541 [2024-12-05 14:19:13.588239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.541 [2024-12-05 14:19:13.588255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.541 qpair failed and we were unable to recover it.
00:29:07.542 [2024-12-05 14:19:13.598169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.542 [2024-12-05 14:19:13.598219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.542 [2024-12-05 14:19:13.598233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.542 [2024-12-05 14:19:13.598240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.542 [2024-12-05 14:19:13.598246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.542 [2024-12-05 14:19:13.598261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.542 qpair failed and we were unable to recover it.
00:29:07.542 [2024-12-05 14:19:13.608179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.542 [2024-12-05 14:19:13.608220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.542 [2024-12-05 14:19:13.608234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.542 [2024-12-05 14:19:13.608241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.542 [2024-12-05 14:19:13.608248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.542 [2024-12-05 14:19:13.608262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.542 qpair failed and we were unable to recover it.
00:29:07.542 [2024-12-05 14:19:13.618244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.542 [2024-12-05 14:19:13.618286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.542 [2024-12-05 14:19:13.618305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.542 [2024-12-05 14:19:13.618313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.542 [2024-12-05 14:19:13.618320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.542 [2024-12-05 14:19:13.618337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.542 qpair failed and we were unable to recover it.
00:29:07.542 [2024-12-05 14:19:13.628246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.542 [2024-12-05 14:19:13.628293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.542 [2024-12-05 14:19:13.628306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.542 [2024-12-05 14:19:13.628313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.542 [2024-12-05 14:19:13.628319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.542 [2024-12-05 14:19:13.628334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.542 qpair failed and we were unable to recover it.
00:29:07.542 [2024-12-05 14:19:13.638295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.542 [2024-12-05 14:19:13.638343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.542 [2024-12-05 14:19:13.638357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.542 [2024-12-05 14:19:13.638364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.542 [2024-12-05 14:19:13.638370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.542 [2024-12-05 14:19:13.638384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.542 qpair failed and we were unable to recover it.
00:29:07.542 [2024-12-05 14:19:13.648260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.542 [2024-12-05 14:19:13.648306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.542 [2024-12-05 14:19:13.648319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.542 [2024-12-05 14:19:13.648326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.542 [2024-12-05 14:19:13.648332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.542 [2024-12-05 14:19:13.648346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.542 qpair failed and we were unable to recover it.
00:29:07.542 [2024-12-05 14:19:13.658333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.542 [2024-12-05 14:19:13.658376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.542 [2024-12-05 14:19:13.658389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.542 [2024-12-05 14:19:13.658396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.542 [2024-12-05 14:19:13.658406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.542 [2024-12-05 14:19:13.658420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.542 qpair failed and we were unable to recover it.
00:29:07.542 [2024-12-05 14:19:13.668374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.542 [2024-12-05 14:19:13.668427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.542 [2024-12-05 14:19:13.668441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.542 [2024-12-05 14:19:13.668448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.542 [2024-12-05 14:19:13.668458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.542 [2024-12-05 14:19:13.668473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.542 qpair failed and we were unable to recover it.
00:29:07.542 [2024-12-05 14:19:13.678401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.542 [2024-12-05 14:19:13.678479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.542 [2024-12-05 14:19:13.678492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.542 [2024-12-05 14:19:13.678499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.542 [2024-12-05 14:19:13.678505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.542 [2024-12-05 14:19:13.678520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.542 qpair failed and we were unable to recover it.
00:29:07.542 [2024-12-05 14:19:13.688407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.542 [2024-12-05 14:19:13.688472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.542 [2024-12-05 14:19:13.688486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.542 [2024-12-05 14:19:13.688493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.542 [2024-12-05 14:19:13.688499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.542 [2024-12-05 14:19:13.688514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.542 qpair failed and we were unable to recover it.
00:29:07.542 [2024-12-05 14:19:13.698441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.543 [2024-12-05 14:19:13.698498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.543 [2024-12-05 14:19:13.698511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.543 [2024-12-05 14:19:13.698518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.543 [2024-12-05 14:19:13.698525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.543 [2024-12-05 14:19:13.698539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.543 qpair failed and we were unable to recover it.
00:29:07.543 [2024-12-05 14:19:13.708482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.543 [2024-12-05 14:19:13.708533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.543 [2024-12-05 14:19:13.708546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.543 [2024-12-05 14:19:13.708553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.543 [2024-12-05 14:19:13.708559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.543 [2024-12-05 14:19:13.708574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.543 qpair failed and we were unable to recover it.
00:29:07.543 [2024-12-05 14:19:13.718508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.543 [2024-12-05 14:19:13.718572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.543 [2024-12-05 14:19:13.718585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.543 [2024-12-05 14:19:13.718592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.543 [2024-12-05 14:19:13.718598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.543 [2024-12-05 14:19:13.718613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.543 qpair failed and we were unable to recover it.
00:29:07.543 [2024-12-05 14:19:13.728520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.543 [2024-12-05 14:19:13.728560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.543 [2024-12-05 14:19:13.728573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.543 [2024-12-05 14:19:13.728579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.543 [2024-12-05 14:19:13.728586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.543 [2024-12-05 14:19:13.728600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.543 qpair failed and we were unable to recover it.
00:29:07.543 [2024-12-05 14:19:13.738530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.543 [2024-12-05 14:19:13.738571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.543 [2024-12-05 14:19:13.738584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.543 [2024-12-05 14:19:13.738591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.543 [2024-12-05 14:19:13.738597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.543 [2024-12-05 14:19:13.738611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.543 qpair failed and we were unable to recover it.
00:29:07.543 [2024-12-05 14:19:13.748549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:07.543 [2024-12-05 14:19:13.748594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:07.543 [2024-12-05 14:19:13.748609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:07.543 [2024-12-05 14:19:13.748616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:07.543 [2024-12-05 14:19:13.748623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90
00:29:07.543 [2024-12-05 14:19:13.748637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:07.543 qpair failed and we were unable to recover it.
00:29:07.543 [2024-12-05 14:19:13.758616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.543 [2024-12-05 14:19:13.758704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.543 [2024-12-05 14:19:13.758717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.543 [2024-12-05 14:19:13.758724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.543 [2024-12-05 14:19:13.758730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.543 [2024-12-05 14:19:13.758744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.543 qpair failed and we were unable to recover it. 00:29:07.543 [2024-12-05 14:19:13.768655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.543 [2024-12-05 14:19:13.768700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.543 [2024-12-05 14:19:13.768712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.543 [2024-12-05 14:19:13.768719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.543 [2024-12-05 14:19:13.768726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.543 [2024-12-05 14:19:13.768739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.543 qpair failed and we were unable to recover it. 00:29:07.543 [2024-12-05 14:19:13.778692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.543 [2024-12-05 14:19:13.778731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.543 [2024-12-05 14:19:13.778744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.543 [2024-12-05 14:19:13.778751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.543 [2024-12-05 14:19:13.778757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.543 [2024-12-05 14:19:13.778771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.543 qpair failed and we were unable to recover it. 
00:29:07.543 [2024-12-05 14:19:13.788687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.543 [2024-12-05 14:19:13.788732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.543 [2024-12-05 14:19:13.788745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.543 [2024-12-05 14:19:13.788752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.543 [2024-12-05 14:19:13.788762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.543 [2024-12-05 14:19:13.788777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.543 qpair failed and we were unable to recover it. 00:29:07.543 [2024-12-05 14:19:13.798693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.544 [2024-12-05 14:19:13.798736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.544 [2024-12-05 14:19:13.798749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.544 [2024-12-05 14:19:13.798756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.544 [2024-12-05 14:19:13.798762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.544 [2024-12-05 14:19:13.798776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-12-05 14:19:13.808762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.544 [2024-12-05 14:19:13.808807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.544 [2024-12-05 14:19:13.808820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.544 [2024-12-05 14:19:13.808827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.544 [2024-12-05 14:19:13.808833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.544 [2024-12-05 14:19:13.808848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.544 qpair failed and we were unable to recover it. 
00:29:07.544 [2024-12-05 14:19:13.818760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.544 [2024-12-05 14:19:13.818815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.544 [2024-12-05 14:19:13.818828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.544 [2024-12-05 14:19:13.818835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.544 [2024-12-05 14:19:13.818841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.544 [2024-12-05 14:19:13.818855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.544 [2024-12-05 14:19:13.828680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.544 [2024-12-05 14:19:13.828728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.544 [2024-12-05 14:19:13.828742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.544 [2024-12-05 14:19:13.828749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.544 [2024-12-05 14:19:13.828755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.544 [2024-12-05 14:19:13.828769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.544 qpair failed and we were unable to recover it. 00:29:07.806 [2024-12-05 14:19:13.838821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.806 [2024-12-05 14:19:13.838868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.806 [2024-12-05 14:19:13.838881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.806 [2024-12-05 14:19:13.838888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.806 [2024-12-05 14:19:13.838895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.806 [2024-12-05 14:19:13.838909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.806 qpair failed and we were unable to recover it. 
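The bracketed microsecond timestamps advance by almost exactly 10 ms per attempt (13.698, 13.708, 13.718, and so on), so the host is retrying on a fixed polling cadence rather than backing off. The spacing can be verified mechanically; a small sketch under the same one-entry-per-line assumption as above:

    # Millisecond gaps between consecutive CONNECT failures (valid within one minute).
    grep 'Connect command failed' build.log \
      | sed 's/.*\[\([0-9: .-]*\)\].*/\1/' \
      | awk -F'[:.]' '{ms = $3 * 1000 + $4 / 1000; if (NR > 1) print ms - prev; prev = ms}'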
00:29:07.806 [2024-12-05 14:19:13.848820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.806 [2024-12-05 14:19:13.848866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.806 [2024-12-05 14:19:13.848879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.806 [2024-12-05 14:19:13.848886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.806 [2024-12-05 14:19:13.848892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.806 [2024-12-05 14:19:13.848906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.806 qpair failed and we were unable to recover it. 00:29:07.806 [2024-12-05 14:19:13.858752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.807 [2024-12-05 14:19:13.858796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.807 [2024-12-05 14:19:13.858809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.807 [2024-12-05 14:19:13.858816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.807 [2024-12-05 14:19:13.858822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.807 [2024-12-05 14:19:13.858836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-12-05 14:19:13.868898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.807 [2024-12-05 14:19:13.868945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.807 [2024-12-05 14:19:13.868958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.807 [2024-12-05 14:19:13.868965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.807 [2024-12-05 14:19:13.868971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.807 [2024-12-05 14:19:13.868985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.807 qpair failed and we were unable to recover it. 
00:29:07.807 [2024-12-05 14:19:13.878949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.807 [2024-12-05 14:19:13.878993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.807 [2024-12-05 14:19:13.879006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.807 [2024-12-05 14:19:13.879013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.807 [2024-12-05 14:19:13.879020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.807 [2024-12-05 14:19:13.879034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-12-05 14:19:13.888954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.807 [2024-12-05 14:19:13.889014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.807 [2024-12-05 14:19:13.889027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.807 [2024-12-05 14:19:13.889034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.807 [2024-12-05 14:19:13.889040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa0000b90 00:29:07.807 [2024-12-05 14:19:13.889054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-12-05 14:19:13.898996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.807 [2024-12-05 14:19:13.899138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.807 [2024-12-05 14:19:13.899203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.807 [2024-12-05 14:19:13.899230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.807 [2024-12-05 14:19:13.899251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aac000b90 00:29:07.807 [2024-12-05 14:19:13.899304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.807 qpair failed and we were unable to recover it. 
00:29:07.807 [2024-12-05 14:19:13.909006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.807 [2024-12-05 14:19:13.909087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.807 [2024-12-05 14:19:13.909135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.807 [2024-12-05 14:19:13.909153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.807 [2024-12-05 14:19:13.909169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aac000b90 00:29:07.807 [2024-12-05 14:19:13.909209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-12-05 14:19:13.909684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1e10 is same with the state(6) to be set 00:29:07.807 [2024-12-05 14:19:13.919049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.807 [2024-12-05 14:19:13.919147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.807 [2024-12-05 14:19:13.919221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.807 [2024-12-05 14:19:13.919247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.807 [2024-12-05 14:19:13.919268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa4000b90 00:29:07.807 [2024-12-05 14:19:13.919324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-12-05 14:19:13.929065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.807 [2024-12-05 14:19:13.929147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.807 [2024-12-05 14:19:13.929194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.807 [2024-12-05 14:19:13.929213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.807 [2024-12-05 14:19:13.929228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2aa4000b90 00:29:07.807 [2024-12-05 14:19:13.929268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.807 qpair failed and we were unable to recover it. 
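Note that the failures are no longer confined to qpair id 4: the entries above now land on qpair ids 1 and 2 as well, each reported against its own transport qpair buffer pool (0x7f2aa0000b90, 0x7f2aac000b90, 0x7f2aa4000b90), which is consistent with one I/O queue per worker core announced just below. A quick way to confirm that mapping from a saved log, under the same assumptions as the earlier sketches (awk keeps the last tqpair address seen and pairs it with the next qpair-id report):

    # Pair each failing transport qpair address with its qpair id.
    awk '/Failed to connect tqpair=/ { tq = $NF }
         /on qpair id/              { print tq, "-> qpair id", $NF }' build.log | sort -u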
00:29:07.807 [2024-12-05 14:19:13.939093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.807 [2024-12-05 14:19:13.939186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.807 [2024-12-05 14:19:13.939250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.807 [2024-12-05 14:19:13.939275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.807 [2024-12-05 14:19:13.939296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19cc0c0 00:29:07.807 [2024-12-05 14:19:13.939350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.807 [2024-12-05 14:19:13.949127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:07.807 [2024-12-05 14:19:13.949211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:07.807 [2024-12-05 14:19:13.949258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:07.807 [2024-12-05 14:19:13.949277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:07.807 [2024-12-05 14:19:13.949292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x19cc0c0 00:29:07.807 [2024-12-05 14:19:13.949331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:07.807 qpair failed and we were unable to recover it. 00:29:07.808 [2024-12-05 14:19:13.949984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c1e10 (9): Bad file descriptor 00:29:07.808 Initializing NVMe Controllers 00:29:07.808 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:07.808 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:07.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:07.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:07.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:07.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:07.808 Initialization complete. Launching workers. 
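With the workers launched, the tc2 case wraps up and the harness moves into teardown: the trace that follows shows killprocess() probing pid 2916477 with kill -0, checking via ps that the process (reactor_4) is not the sudo wrapper before killing and reaping it, after which nvmftestfini unloads nvme-tcp, nvme-fabrics, and nvme-keyring and flushes the test interface cvl_0_1. A condensed sketch of the kill helper, reconstructed from that xtrace; this is a reading of the trace, not the verbatim SPDK function:

    # Reconstructed from the xtrace below; control flow is inferred, not copied.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                           # '[' -z 2916477 ']'
        kill -0 "$pid" || return 0                          # already gone, nothing to do
        local process_name
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid") # -> reactor_4 here
        fi
        [ "$process_name" = sudo ] && return 1              # never kill the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                         # reap it so the next test starts clean
    }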
00:29:07.808 Starting thread on core 1 00:29:07.808 Starting thread on core 2 00:29:07.808 Starting thread on core 3 00:29:07.808 Starting thread on core 0 00:29:07.808 14:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:07.808 00:29:07.808 real 0m11.516s 00:29:07.808 user 0m21.917s 00:29:07.808 sys 0m3.734s 00:29:07.808 14:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:07.808 14:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.808 ************************************ 00:29:07.808 END TEST nvmf_target_disconnect_tc2 00:29:07.808 ************************************ 00:29:07.808 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:07.808 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:07.808 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:07.808 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:07.808 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:29:07.808 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:07.808 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:29:07.808 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:07.808 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:07.808 rmmod nvme_tcp 00:29:07.808 rmmod nvme_fabrics 00:29:07.808 rmmod nvme_keyring 00:29:07.808 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:07.808 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:29:07.808 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:29:07.808 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2916477 ']' 00:29:07.808 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2916477 00:29:07.808 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2916477 ']' 00:29:07.808 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2916477 00:29:07.808 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:29:07.808 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:07.808 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2916477 00:29:08.069 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:29:08.069 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:29:08.069 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2916477' 00:29:08.069 killing process with pid 2916477 00:29:08.069 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 2916477 00:29:08.069 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2916477 00:29:08.070 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:08.070 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:08.070 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:08.070 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:29:08.070 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:08.070 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:29:08.070 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:29:08.070 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:08.070 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:08.070 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.070 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:08.070 14:19:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.613 14:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:10.613 00:29:10.613 real 0m21.958s 00:29:10.613 user 0m50.113s 00:29:10.613 sys 0m9.930s 00:29:10.613 14:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:10.613 14:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:10.613 ************************************ 00:29:10.613 END TEST nvmf_target_disconnect 00:29:10.613 ************************************ 00:29:10.613 14:19:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:10.613 00:29:10.613 real 6m30.835s 00:29:10.613 user 11m23.844s 00:29:10.613 sys 2m14.302s 00:29:10.613 14:19:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:10.613 14:19:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.613 ************************************ 00:29:10.613 END TEST nvmf_host 00:29:10.613 ************************************ 00:29:10.613 14:19:16 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:10.613 14:19:16 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:29:10.613 14:19:16 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:10.613 14:19:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:10.613 14:19:16 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:10.613 14:19:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:10.613 ************************************ 00:29:10.613 START TEST nvmf_target_core_interrupt_mode 00:29:10.613 ************************************ 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:10.613 * Looking for test storage... 00:29:10.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:10.613 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:10.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.614 --rc genhtml_branch_coverage=1 00:29:10.614 --rc genhtml_function_coverage=1 00:29:10.614 --rc genhtml_legend=1 00:29:10.614 --rc geninfo_all_blocks=1 00:29:10.614 --rc geninfo_unexecuted_blocks=1 00:29:10.614 00:29:10.614 ' 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:10.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.614 --rc genhtml_branch_coverage=1 00:29:10.614 --rc genhtml_function_coverage=1 00:29:10.614 --rc genhtml_legend=1 00:29:10.614 --rc geninfo_all_blocks=1 00:29:10.614 --rc geninfo_unexecuted_blocks=1 00:29:10.614 00:29:10.614 ' 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:10.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.614 --rc genhtml_branch_coverage=1 00:29:10.614 --rc genhtml_function_coverage=1 00:29:10.614 --rc genhtml_legend=1 00:29:10.614 --rc geninfo_all_blocks=1 00:29:10.614 --rc geninfo_unexecuted_blocks=1 00:29:10.614 00:29:10.614 ' 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:10.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.614 --rc genhtml_branch_coverage=1 00:29:10.614 --rc genhtml_function_coverage=1 00:29:10.614 --rc genhtml_legend=1 00:29:10.614 --rc geninfo_all_blocks=1 00:29:10.614 --rc geninfo_unexecuted_blocks=1 00:29:10.614 00:29:10.614 ' 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:10.614 ************************************ 00:29:10.614 START TEST nvmf_abort 00:29:10.614 ************************************ 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:10.614 * Looking for test storage... 00:29:10.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:29:10.614 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:10.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.877 --rc genhtml_branch_coverage=1 00:29:10.877 --rc genhtml_function_coverage=1 00:29:10.877 --rc genhtml_legend=1 00:29:10.877 --rc geninfo_all_blocks=1 00:29:10.877 --rc geninfo_unexecuted_blocks=1 00:29:10.877 00:29:10.877 ' 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:10.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.877 --rc genhtml_branch_coverage=1 00:29:10.877 --rc genhtml_function_coverage=1 00:29:10.877 --rc genhtml_legend=1 00:29:10.877 --rc geninfo_all_blocks=1 00:29:10.877 --rc geninfo_unexecuted_blocks=1 00:29:10.877 00:29:10.877 ' 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:10.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.877 --rc genhtml_branch_coverage=1 00:29:10.877 --rc genhtml_function_coverage=1 00:29:10.877 --rc genhtml_legend=1 00:29:10.877 --rc geninfo_all_blocks=1 00:29:10.877 --rc geninfo_unexecuted_blocks=1 00:29:10.877 00:29:10.877 ' 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:10.877 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.877 --rc genhtml_branch_coverage=1 00:29:10.877 --rc genhtml_function_coverage=1 00:29:10.877 --rc genhtml_legend=1 00:29:10.877 --rc geninfo_all_blocks=1 00:29:10.877 --rc geninfo_unexecuted_blocks=1 00:29:10.877 00:29:10.877 ' 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:10.877 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:10.878 14:19:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:10.878 14:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:19.027 14:19:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:19.027 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
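The xtrace above is gather_supported_nvmf_pci_devs() classifying the machine's NICs by PCI vendor:device ID: both ports of the test NIC report 0x8086:0x159b (the first match is printed above, the second follows just below), so they land in the e810 bucket and become the candidate pci_devs. Condensed to its core, and simplified from the trace (the real helper in test/nvmf/common.sh also populates pci_bus_cache and handles the Mellanox/RDMA branches), the logic is:

    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})    # E810-C
    e810+=(${pci_bus_cache["$intel:0x159b"]})    # E810-XXV -- both 0000:4b:00.x ports match here
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    pci_devs+=("${e810[@]}")
    [[ $SPDK_TEST_NVMF_NICS == e810 ]] && pci_devs=("${e810[@]}")   # keep only the requested NIC family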
00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:19.027 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:19.027 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:19.028 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:19.028 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:19.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:19.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.537 ms
00:29:19.028
00:29:19.028 --- 10.0.0.2 ping statistics ---
00:29:19.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:19.028 rtt min/avg/max/mdev = 0.537/0.537/0.537/0.000 ms
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:19.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:19.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms
00:29:19.028
00:29:19.028 --- 10.0.0.1 ping statistics ---
00:29:19.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:19.028 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2921959
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2921959
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2921959 ']'
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:19.028 14:19:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:29:19.028 [2024-12-05 14:19:24.506081] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:29:19.028 [2024-12-05 14:19:24.507207] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization...
00:29:19.028 [2024-12-05 14:19:24.507254] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:19.028 [2024-12-05 14:19:24.605412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:19.028 [2024-12-05 14:19:24.656702] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:19.028 [2024-12-05 14:19:24.656751] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:19.028 [2024-12-05 14:19:24.656760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:19.028 [2024-12-05 14:19:24.656767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:19.028 [2024-12-05 14:19:24.656773] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:19.028 [2024-12-05 14:19:24.658839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:19.028 [2024-12-05 14:19:24.659006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:19.028 [2024-12-05 14:19:24.659007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:29:19.028 [2024-12-05 14:19:24.736853] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:29:19.028 [2024-12-05 14:19:24.737882] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:29:19.028 [2024-12-05 14:19:24.738270] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
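Stripped of the xtrace prefixes, the nvmf_tcp_init sequence above built a split test bed before the target was launched: the target port cvl_0_0 is moved into its own network namespace so that initiator (10.0.0.1 on cvl_0_1) and target (10.0.0.2 on cvl_0_0) talk over the physical link rather than a loopback path. Condensed from the trace (comment string shortened):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # both directions verified above

The SPDK_NVMF comment tag matters later: teardown removes exactly the rules that carry it (see the iptr trace further down).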
00:29:19.028 [2024-12-05 14:19:24.738416] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:19.028 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:19.028 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:29:19.028 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:19.028 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:19.028 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:19.318 [2024-12-05 14:19:25.371912] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:19.318 Malloc0 00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:19.318 Delay0 00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:29:19.318 [2024-12-05 14:19:25.479862] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:19.318 14:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:29:19.595 [2024-12-05 14:19:25.664664] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:29:21.506 Initializing NVMe Controllers
00:29:21.506 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:29:21.506 controller IO queue size 128 less than required
00:29:21.506 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:29:21.506 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:29:21.506 Initialization complete. Launching workers.
00:29:21.506 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28587
00:29:21.506 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28648, failed to submit 66
00:29:21.506 success 28587, unsuccessful 61, failed 0
00:29:21.506 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:29:21.506 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:21.506 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:29:21.506 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:21.506 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:29:21.506 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:29:21.506 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:21.506 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync
00:29:21.506 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:21.506 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e
00:29:21.506 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:21.506 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:29:21.768 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:21.768 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e
00:29:21.768 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0
00:29:21.768 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2921959 ']'
00:29:21.768 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2921959
00:29:21.768 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2921959 ']'
00:29:21.768 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2921959
00:29:21.768 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname
00:29:21.768 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:21.768 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2921959
00:29:21.768 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:21.768 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:21.768 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2921959'
killing process with pid 2921959
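Reconstructed from the rpc_cmd traces, the whole abort scenario is six RPCs plus the example binary (rpc_cmd is the suite's wrapper around scripts/rpc.py; paths shortened here). The Delay0 bdev layered on Malloc0 injects roughly a second of artificial latency (the -r/-t/-w/-n values are microseconds) into every I/O, which is presumably what keeps enough requests in flight for the abort path to have something to cancel:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0                  # 64 MB bdev, 4096-byte blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

Reading the summary above: of 28648 submitted aborts, 28587 succeeded (matching the 28587 failed I/O), 61 came back unsuccessful, 66 could not be submitted, and 127 I/O completed normally -- a pass for this test.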
00:29:21.768 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2921959
00:29:21.768 14:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2921959
00:29:21.768 14:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:21.768 14:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:21.768 14:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:21.768 14:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr
00:29:22.029 14:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save
00:29:22.029 14:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:22.029 14:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore
00:29:22.029 14:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:22.029 14:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:22.029 14:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:22.029 14:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:22.029 14:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:23.940 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:23.940
00:29:23.940 real 0m13.399s
00:29:23.940 user 0m11.102s
00:29:23.940 sys 0m6.955s
00:29:23.940 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:23.940 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:29:23.940 ************************************
00:29:23.940 END TEST nvmf_abort
00:29:23.940 ************************************
00:29:23.940 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode
00:29:23.940 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:29:23.940 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:23.940 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:29:23.940 ************************************
00:29:23.940 START TEST nvmf_ns_hotplug_stress
00:29:23.940 ************************************
00:29:23.940 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode
00:29:24.203 * Looking for test storage...
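Between the result summary and the START TEST banner above, nvmftestfini tore the first test's bed down as the mirror image of its setup: unload the initiator-side kernel modules, kill the target, drop only the firewall rules the suite tagged, and dissolve the namespace. Condensed from the trace (the _remove_spdk_ns body is not expanded there; deleting the netns, sketched on the fourth line, is what returns cvl_0_0 to the default namespace):

    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics
    kill $nvmfpid && wait $nvmfpid
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: strip only SPDK_NVMF-tagged rules
    ip netns delete cvl_0_0_ns_spdk                         # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1

The ns_hotplug_stress prologue that follows then probes for test storage and re-sources nvmf/common.sh from scratch.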
00:29:24.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:24.203 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:24.203 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:29:24.203 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:24.203 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:24.203 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:24.203 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:24.203 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:24.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.204 --rc genhtml_branch_coverage=1 00:29:24.204 --rc genhtml_function_coverage=1 00:29:24.204 --rc genhtml_legend=1 00:29:24.204 --rc geninfo_all_blocks=1 00:29:24.204 --rc geninfo_unexecuted_blocks=1 00:29:24.204 00:29:24.204 ' 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:24.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.204 --rc genhtml_branch_coverage=1 00:29:24.204 --rc genhtml_function_coverage=1 00:29:24.204 --rc genhtml_legend=1 00:29:24.204 --rc geninfo_all_blocks=1 00:29:24.204 --rc geninfo_unexecuted_blocks=1 00:29:24.204 00:29:24.204 ' 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:24.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.204 --rc genhtml_branch_coverage=1 00:29:24.204 --rc genhtml_function_coverage=1 00:29:24.204 --rc genhtml_legend=1 00:29:24.204 --rc geninfo_all_blocks=1 00:29:24.204 --rc geninfo_unexecuted_blocks=1 00:29:24.204 00:29:24.204 ' 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:24.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.204 --rc genhtml_branch_coverage=1 00:29:24.204 --rc genhtml_function_coverage=1 
00:29:24.204 --rc genhtml_legend=1 00:29:24.204 --rc geninfo_all_blocks=1 00:29:24.204 --rc geninfo_unexecuted_blocks=1 00:29:24.204 00:29:24.204 ' 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
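The export.sh trace that follows dumps PATH once per line of the script, and each dump shows the same three toolchain directories (go 1.21.1, golangci 1.54.2, protoc 21.7) repeated six times over: the script prepends unconditionally, and it has been sourced once per test in this job. Harmless, but the lines balloon. An idempotent guard (hypothetical -- not what the packaged export.sh does) would avoid it:

    for dir in /opt/protoc/21.7/bin /opt/go/1.21.1/bin /opt/golangci/1.54.2/bin; do
        case ":$PATH:" in
            *":$dir:"*) ;;              # already present, skip
            *) PATH="$dir:$PATH" ;;
        esac
    done
    export PATH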
00:29:24.204 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:29:24.205 14:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:32.348 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:32.348 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:29:32.348 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:32.348 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:32.348 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:32.348 14:19:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:32.348 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:32.348 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:29:32.348 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:32.348 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:29:32.348 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:29:32.348 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:29:32.348 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:29:32.348 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:29:32.348 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:29:32.348 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:32.349 14:19:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:32.349 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:32.349 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:32.349 
14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:32.349 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:32.349 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:32.349 14:19:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:32.349 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:32.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:32.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:29:32.350 00:29:32.350 --- 10.0.0.2 ping statistics --- 00:29:32.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.350 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:32.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:32.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:29:32.350 00:29:32.350 --- 10.0.0.1 ping statistics --- 00:29:32.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.350 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2926669 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2926669 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2926669 ']' 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
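Steps @250-@291 above build the single-host, two-endpoint topology: the target port cvl_0_0 is moved into a fresh network namespace, both ends get addresses on 10.0.0.0/24, an iptables rule opens TCP port 4420 toward the initiator interface, and one ping in each direction proves the link before any NVMe traffic flows. The same sequence condensed into a plain script, with names and addresses taken from the log:

    # Sketch: isolate the target NIC in its own netns and verify reachability.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                  # target side lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                               # root ns -> netns
    ip netns exec "$NS" ping -c 1 10.0.0.1           # netns -> root ns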
00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:32.350 14:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:32.350 [2024-12-05 14:19:37.989757] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:32.350 [2024-12-05 14:19:37.990926] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:29:32.350 [2024-12-05 14:19:37.990981] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:32.350 [2024-12-05 14:19:38.078604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:32.350 [2024-12-05 14:19:38.134051] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:32.350 [2024-12-05 14:19:38.134099] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:32.350 [2024-12-05 14:19:38.134109] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:32.350 [2024-12-05 14:19:38.134116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:32.350 [2024-12-05 14:19:38.134122] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:32.350 [2024-12-05 14:19:38.135990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:32.350 [2024-12-05 14:19:38.136124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:32.350 [2024-12-05 14:19:38.136124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.350 [2024-12-05 14:19:38.214301] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:32.350 [2024-12-05 14:19:38.215440] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:32.350 [2024-12-05 14:19:38.215765] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:32.350 [2024-12-05 14:19:38.215924] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
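nvmfappstart has now launched the target: nvmf_tgt runs inside the namespace with -m 0xE (cores 1-3) and --interrupt-mode, so the DPDK EAL notices are followed by thread.c messages confirming the app thread and all three poll-group threads run interrupt-driven rather than busy-polling. A rough sketch of the launch-and-wait pattern; the readiness probe below approximates waitforlisten with an rpc.py poll and is an assumption, not the harness's exact code:

    # Sketch: start nvmf_tgt in interrupt mode, then wait for its RPC socket.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    # Poll /var/tmp/spdk.sock until the app answers (approximate waitforlisten).
    until "$SPDK/scripts/rpc.py" -t 1 spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done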
00:29:32.612 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:32.612 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:29:32.612 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:32.612 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:32.612 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:32.612 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:32.612 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:29:32.612 14:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:32.873 [2024-12-05 14:19:39.029200] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:32.873 14:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:33.134 14:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:33.134 [2024-12-05 14:19:39.397882] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:33.134 14:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:33.395 14:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:33.656 Malloc0 00:29:33.656 14:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:33.916 Delay0 00:29:33.917 14:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:33.917 14:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:34.177 NULL1 00:29:34.177 14:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
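Steps @25-@36 provision the target over JSON-RPC: a TCP transport with 8192-byte in-capsule data, subsystem cnode1 capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and two namespace backends: Malloc0 wrapped in a Delay0 bdev that injects 1,000,000 us (1 s) of latency on every I/O path, plus a 1000 MB null bdev NULL1 whose size the stress loop will keep growing. The same chain as a plain script, with the commands copied from the trace:

    # Sketch: the RPC provisioning chain shown above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$SPDK/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0      # 32 MB backing store, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000   # avg/p99 read+write delay, in us
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512           # 1000 MB null bdev, 512 B blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1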
00:29:34.437 14:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2927350 00:29:34.437 14:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:34.437 14:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:34.437 14:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:34.699 14:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:34.699 14:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:34.699 14:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:34.960 true 00:29:34.960 14:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:34.960 14:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:35.221 14:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:35.483 14:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:35.483 14:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:35.744 true 00:29:35.744 14:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:35.744 14:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:35.744 14:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:36.006 14:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:36.006 14:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:36.267 true 00:29:36.267 14:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
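From here to the end of the section the log is one loop: spdk_nvme_perf (PERF_PID=2927350) drives 30 s of queue-depth-128 randread against the subsystem while the script repeatedly removes namespace 1, re-adds Delay0, and grows NULL1 by one size unit per pass (null_size 1001, 1002, ...). Each "kill -0 2927350" asserts the workload survived the previous hotplug, and each "true" is bdev_null_resize reporting success. A distilled sketch of that loop, following the @44-@50 records; PERF_PID stands for the backgrounded perf process shown above:

    # Sketch: the ns hotplug stress loop that produces the records below.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do    # loop until the 30 s workload exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size" # prints the "true" lines in the trace
    done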
-- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:36.267 14:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:36.528 14:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:36.528 14:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:36.528 14:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:36.788 true 00:29:36.788 14:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:36.788 14:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:37.049 14:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:37.309 14:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:37.309 14:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:37.309 true 00:29:37.309 14:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:37.309 14:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:37.570 14:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:37.831 14:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:37.831 14:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:37.831 true 00:29:37.831 14:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:37.831 14:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:38.092 14:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:29:38.352 14:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:29:38.352 14:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:38.352 true 00:29:38.612 14:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:38.612 14:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:38.612 14:19:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:38.873 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:38.873 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:39.134 true 00:29:39.134 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:39.134 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:39.134 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:39.395 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:29:39.395 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:39.655 true 00:29:39.655 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:39.655 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:39.914 14:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:39.914 14:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:39.914 14:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:40.174 true 00:29:40.174 14:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 2927350 00:29:40.174 14:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:40.433 14:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:40.433 14:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:40.433 14:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:40.693 true 00:29:40.693 14:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:40.693 14:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:40.954 14:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:41.215 14:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:41.215 14:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:41.215 true 00:29:41.215 14:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:41.215 14:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:41.476 14:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:41.736 14:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:41.736 14:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:41.736 true 00:29:41.736 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:41.736 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:41.997 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:42.258 14:19:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:42.258 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:42.258 true 00:29:42.518 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:42.518 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:42.518 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:42.778 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:42.778 14:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:43.039 true 00:29:43.039 14:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:43.039 14:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:43.039 14:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:43.300 14:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:43.300 14:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:43.561 true 00:29:43.561 14:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:43.561 14:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:43.561 14:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:43.821 14:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:43.821 14:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:44.080 true 00:29:44.080 14:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:44.081 14:19:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:44.340 14:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:44.340 14:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:44.340 14:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:44.599 true 00:29:44.599 14:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:44.599 14:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:44.857 14:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:44.857 14:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:44.857 14:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:45.115 true 00:29:45.115 14:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:45.115 14:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:45.373 14:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:45.373 14:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:45.373 14:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:45.631 true 00:29:45.631 14:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:45.631 14:19:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:45.889 14:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:46.147 14:19:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:46.147 14:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:29:46.147 true 00:29:46.147 14:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:46.147 14:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:46.406 14:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:46.666 14:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:46.666 14:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:46.666 true 00:29:46.666 14:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:46.666 14:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:46.926 14:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:47.187 14:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:47.187 14:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:29:47.187 true 00:29:47.448 14:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:47.448 14:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:47.448 14:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:47.707 14:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:29:47.707 14:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:29:47.707 true 00:29:47.968 14:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:47.968 14:19:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:47.968 14:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:48.228 14:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:29:48.228 14:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:29:48.487 true 00:29:48.487 14:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:48.487 14:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:48.746 14:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:48.746 14:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:29:48.746 14:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:29:49.009 true 00:29:49.009 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:49.009 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:49.272 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:49.272 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:29:49.272 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:29:49.531 true 00:29:49.531 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:49.531 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:49.791 14:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:49.791 14:19:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:29:49.791 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:29:50.049 true 00:29:50.049 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:50.049 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:50.307 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:50.566 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:29:50.566 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:29:50.566 true 00:29:50.566 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:50.566 14:19:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:50.824 14:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:51.082 14:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:29:51.082 14:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:29:51.082 true 00:29:51.082 14:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:51.083 14:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:51.342 14:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:51.602 14:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:29:51.602 14:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:29:51.862 true 00:29:51.863 14:19:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:51.863 14:19:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:51.863 14:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:52.122 14:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:29:52.122 14:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:29:52.383 true 00:29:52.383 14:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:52.383 14:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:52.644 14:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:52.644 14:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:29:52.644 14:19:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:29:52.905 true 00:29:52.905 14:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:52.905 14:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:53.166 14:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:53.166 14:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:29:53.166 14:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:29:53.426 true 00:29:53.426 14:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:53.426 14:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:53.687 14:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:53.951 14:19:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:29:53.951 14:19:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:29:53.951 true 00:29:53.951 14:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:53.951 14:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:54.212 14:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:54.471 14:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:29:54.471 14:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:29:54.471 true 00:29:54.471 14:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:54.471 14:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:54.731 14:20:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:54.990 14:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:29:54.990 14:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:29:54.990 true 00:29:54.990 14:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:54.990 14:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:55.249 14:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:55.508 14:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:29:55.508 14:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:29:55.765 true 00:29:55.765 14:20:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:55.765 14:20:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:55.765 14:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:56.024 14:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:29:56.024 14:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:29:56.282 true 00:29:56.282 14:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:56.282 14:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:56.542 14:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:56.542 14:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:29:56.542 14:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:29:56.801 true 00:29:56.801 14:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:56.801 14:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:57.060 14:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:57.060 14:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:29:57.060 14:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:29:57.320 true 00:29:57.320 14:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:57.320 14:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:57.580 14:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:57.841 14:20:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:29:57.841 14:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:29:57.841 true 00:29:57.841 14:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:57.841 14:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.100 14:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:58.358 14:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:29:58.358 14:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:29:58.358 true 00:29:58.358 14:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:58.358 14:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.616 14:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:58.875 14:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:29:58.875 14:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:29:58.875 true 00:29:59.134 14:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:59.134 14:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:59.134 14:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:59.391 14:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:29:59.391 14:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:29:59.650 true 00:29:59.650 14:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:29:59.650 14:20:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:59.650 14:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:59.909 14:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:29:59.909 14:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:30:00.168 true 00:30:00.168 14:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:30:00.168 14:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:00.428 14:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:00.428 14:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:30:00.428 14:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:30:00.688 true 00:30:00.688 14:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:30:00.688 14:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:00.948 14:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:00.948 14:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:30:00.948 14:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:30:01.207 true 00:30:01.207 14:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:30:01.207 14:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:01.467 14:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:01.726 14:20:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:30:01.726 14:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:30:01.726 true 00:30:01.726 14:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:30:01.726 14:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:01.986 14:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:02.246 14:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:30:02.246 14:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:30:02.246 true 00:30:02.246 14:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:30:02.246 14:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:02.505 14:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:02.766 14:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:30:02.766 14:20:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:30:03.027 true 00:30:03.027 14:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:30:03.027 14:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:03.027 14:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:03.287 14:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:30:03.287 14:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:30:03.546 true 00:30:03.546 14:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350 00:30:03.546 14:20:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:03.807 14:20:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:03.807 14:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:30:03.807 14:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:30:04.067 true
00:30:04.067 14:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350
00:30:04.067 14:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:04.326 14:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:04.326 14:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:30:04.326 14:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:30:04.587 true
00:30:04.587 14:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350
00:30:04.587 14:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:04.587 Initializing NVMe Controllers
00:30:04.587 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:04.587 Controller IO queue size 128, less than required.
00:30:04.587 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:04.587 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:04.587 Initialization complete. Launching workers.
00:30:04.587 ========================================================
00:30:04.587                                                                                                Latency(us)
00:30:04.587 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:30:04.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30337.27      14.81    4219.12    1123.40   11181.12
00:30:04.587 ========================================================
00:30:04.587 Total                                                                    :   30337.27      14.81    4219.12    1123.40   11181.12
00:30:04.846 14:20:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:04.846 14:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:30:04.846 14:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:30:05.105 true
00:30:05.106 14:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2927350
00:30:05.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2927350) - No such process
00:30:05.106 14:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2927350
00:30:05.106 14:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:05.366 14:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:05.626 14:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:30:05.626 14:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:30:05.626 14:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:30:05.626 14:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:05.626 14:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:30:05.626 null0
00:30:05.626 14:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:05.626 14:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:05.626 14:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:30:05.891 null1
00:30:05.891 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:05.891 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
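
This is the hinge of the test: the single-namespace loop keeps cycling until the perf initiator (PID 2927350) exits, at which point kill -0 reports "No such process", the loop falls through, the script reaps the initiator (sh@53) and strips namespaces 1 and 2 (sh@54-sh@55) before provisioning bdevs for the parallel phase. Reconstructed from the sh@44-sh@53 markers in this trace, the driving loop reads roughly as the sketch below; the PERF_PID name and the exact null_size arithmetic are assumptions rather than verbatim SPDK source, and the bare true lines in the trace are the printed result of each bdev_null_resize RPC.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Sketch of the sh@44-sh@53 loop; PERF_PID and the size arithmetic are assumed.
    while kill -0 "$PERF_PID"; do                                      # sh@44: initiator still alive?
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # sh@45: hot-remove NSID 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # sh@46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                                   # sh@49: 1039, 1040, ... 1055 in this trace
        $rpc bdev_null_resize NULL1 "$null_size"                       # sh@50: grow NULL1 while I/O runs
    done
    wait "$PERF_PID"                                                   # sh@53: collect the initiator's exit status

The perf summary the loop raced against is self-consistent: 14.81 MiB/s over 30337.27 IOPS works out to roughly 512 bytes per request, and the "queue size 128, less than required" warning means requests queue at the NVMe driver, which is consistent with the 4219 us average latency.
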
00:30:05.891 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:30:06.213 null2 00:30:06.213 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:06.213 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:06.213 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:06.213 null3 00:30:06.213 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:06.213 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:06.213 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:06.546 null4 00:30:06.546 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:06.546 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:06.546 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:06.546 null5 00:30:06.546 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:06.546 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:06.546 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:06.809 null6 00:30:06.809 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:06.809 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:06.809 14:20:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:06.809 null7 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 
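
The null2 through null7 creations above complete the pool of one bdev per worker. As I read SPDK's rpc.py usage, the three positional arguments to bdev_null_create at sh@60 are the bdev name, the size in MB, and the block size, so each null0..null7 is a 100 MB device with 4096-byte blocks; the RPC prints the created name, which is the bare null0/null1/... echoed between the trace entries. A minimal standalone pair, using the hypothetical name null8 so as not to collide with the test's bdevs:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_null_create null8 100 4096   # name, size in MB, block size; prints "null8" on success
    $rpc bdev_null_delete null8            # matching teardown RPC
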
00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:07.068 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
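
From sh@58 onward the log is eight add_remove workers running concurrently, which is why sh@14-sh@18 entries for different namespace IDs interleave from here to the end of the section, and why the wait at sh@66 a few entries below names eight PIDs. Reconstructed from the trace markers (anything beyond what the markers show is an assumption), the phase looks roughly like:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    add_remove() {
        local nsid=$1 bdev=$2                                                     # sh@14
        for ((i = 0; i < 10; i++)); do                                            # sh@16
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }

    nthreads=8
    pids=()                                                                       # sh@58
    for ((i = 0; i < nthreads; i++)); do                                          # sh@59
        $rpc bdev_null_create "null$i" 100 4096                                   # sh@60
    done
    for ((i = 0; i < nthreads; i++)); do                                          # sh@62
        add_remove $((i + 1)) "null$i" &                                          # sh@63: one hotplug worker per namespace
        pids+=($!)                                                                # sh@64
    done
    wait "${pids[@]}"                                                             # sh@66: 8 workers x 10 cycles = 80 add/removes

Each worker runs in a background subshell, so the per-worker i at sh@16 never clashes with the spawning loop's own i; the out-of-order appearance of the add/remove calls in the trace is simply the eight subshells' xtrace output multiplexed into one log.
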
00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2933526 2933528 2933529 2933531 2933533 2933535 2933537 2933538 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:07.069 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:07.328 14:20:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.328 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:07.588 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.588 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:07.588 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:07.588 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:07.588 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:07.588 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:07.588 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:07.588 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:07.588 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.588 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.588 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:07.588 14:20:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.588 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.588 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:07.848 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.848 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.848 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:07.848 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.848 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.848 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:07.848 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.848 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.848 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:07.848 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.848 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.848 14:20:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:07.848 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.848 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.848 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:07.848 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.848 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.848 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:07.848 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.848 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:07.848 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:07.848 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:07.848 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:07.848 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.107 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:08.367 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.367 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:08.367 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:08.367 14:20:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:08.367 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:08.367 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:08.367 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:08.367 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:08.367 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.367 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.367 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:08.367 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.367 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.367 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:08.367 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.367 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.367 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:08.367 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.367 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.367 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:08.627 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.627 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:30:08.627 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:08.627 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.627 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.627 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:08.627 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.627 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.627 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.627 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:08.627 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.627 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:08.627 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.627 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:08.627 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:08.627 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:08.627 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:08.628 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:08.628 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:08.628 
14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:08.888 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.888 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.888 14:20:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:08.888 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.888 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.888 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.888 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:08.888 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.888 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:08.888 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.888 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.888 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:08.888 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.888 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.888 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:08.888 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.888 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.888 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:08.888 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.888 14:20:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.888 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:08.888 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.888 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.888 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.888 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:08.888 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:09.149 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:09.149 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:09.149 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:09.149 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:09.149 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:09.149 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:09.149 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.149 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.149 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:09.149 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.149 14:20:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.149 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:09.149 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.149 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.149 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:09.149 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.150 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.150 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:09.150 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.150 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.150 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:09.150 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.150 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.150 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:09.410 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.410 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.410 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:09.410 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:09.410 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:09.410 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:09.410 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:09.410 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.410 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.410 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:09.410 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:09.410 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:09.410 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:09.410 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.410 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.410 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:09.670 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:09.670 14:20:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:09.929 14:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:09.929 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.929 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.929 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:09.929 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.929 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.929 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:09.929 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:09.929 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:09.929 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.929 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.929 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:09.929 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.929 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.929 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:09.929 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.929 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.929 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:30:09.929 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.929 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.929 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:10.188 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.188 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:10.188 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.188 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.188 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:10.188 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:10.188 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.188 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.188 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:10.188 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:10.188 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:10.188 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:10.189 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.189 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.189 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
1 nqn.2016-06.io.spdk:cnode1 null0 00:30:10.189 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.189 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.189 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:10.189 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:10.189 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.189 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.189 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:10.189 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.189 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.189 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:10.189 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:10.449 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.449 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.449 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:10.449 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.449 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.449 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:10.449 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:10.449 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.449 14:20:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.449 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:10.449 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:10.449 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:10.449 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.449 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:10.449 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:10.449 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.449 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.449 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:10.709 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.709 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.709 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:10.709 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.709 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.709 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.709 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.709 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.709 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.709 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:10.709 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.709 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.709 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.709 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.709 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.709 14:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.970 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.970 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.970 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:10.970 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:10.970 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:10.970 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:30:10.970 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:10.970 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:30:10.970 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:10.970 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:10.970 rmmod nvme_tcp 00:30:10.970 rmmod nvme_fabrics 00:30:10.970 rmmod nvme_keyring 00:30:10.970 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:10.970 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:30:10.970 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:30:10.970 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2926669 ']' 00:30:10.970 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2926669 00:30:10.970 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2926669 ']' 00:30:10.970 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2926669 00:30:10.970 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:30:10.970 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
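What the churn above is exercising, distilled: ns_hotplug_stress.sh repeatedly attaches and detaches namespaces 1-8 (backed by bdevs null0-null7) on nqn.2016-06.io.spdk:cnode1 while the target runs in interrupt mode. A sketch reconstructed from the @16-@18 trace lines; the helper name and the backgrounded fan-out of the eight nsid/bdev pairs are assumptions, not shown verbatim in this log.

    # Hedged reconstruction from the ns_hotplug_stress.sh@16-@18 xtrace above.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; ++i)); do                                              # @16
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }
    # The shuffled ordering of nsids in the trace implies the pairs run as parallel jobs:
    for n in {1..8}; do add_remove "$n" "null$((n - 1))" & done
    wait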
common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:10.971 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2926669
00:30:10.971 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:10.971 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:10.971 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2926669'
killing process with pid 2926669
00:30:10.971 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2926669
00:30:10.971 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2926669
00:30:11.232 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:11.232 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:11.232 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:11.232 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:30:11.232 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:30:11.232 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:11.232 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:30:11.232 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:11.232 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:11.232 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:11.232 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:11.232 14:20:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:13.771
00:30:13.771 real 0m49.216s
00:30:13.771 user 3m4.124s
00:30:13.771 sys 0m21.902s
00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:30:13.771 ************************************
00:30:13.771 END TEST nvmf_ns_hotplug_stress
00:30:13.771 ************************************
00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:30:13.771
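The real/user/sys block and the asterisk banners around each test come from the run_test wrapper in autotest_common.sh. A minimal sketch of the behavior implied by this trace; only the @1105 argument guard, the banners, and the time(1) summary are evidenced by the log, so the body is an assumption, not the actual implementation.

    # Hedged sketch of run_test as implied by the trace above and below.
    run_test() {
        (($# <= 1)) && return 1          # @1105 guard: needs a test name plus a command
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"      # banner printing happens with xtrace disabled (@1111)
        echo "************************************"
        time "$@"                         # produces the real/user/sys block seen at END TEST
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }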
14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:13.771 ************************************ 00:30:13.771 START TEST nvmf_delete_subsystem 00:30:13.771 ************************************ 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:13.771 * Looking for test storage... 00:30:13.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:13.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.771 --rc genhtml_branch_coverage=1 00:30:13.771 --rc genhtml_function_coverage=1 00:30:13.771 --rc genhtml_legend=1 00:30:13.771 --rc geninfo_all_blocks=1 00:30:13.771 --rc geninfo_unexecuted_blocks=1 00:30:13.771 00:30:13.771 ' 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:13.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.771 --rc genhtml_branch_coverage=1 00:30:13.771 --rc genhtml_function_coverage=1 00:30:13.771 --rc genhtml_legend=1 00:30:13.771 --rc geninfo_all_blocks=1 00:30:13.771 --rc geninfo_unexecuted_blocks=1 00:30:13.771 00:30:13.771 ' 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:13.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.771 --rc genhtml_branch_coverage=1 00:30:13.771 --rc genhtml_function_coverage=1 00:30:13.771 --rc genhtml_legend=1 00:30:13.771 --rc geninfo_all_blocks=1 00:30:13.771 --rc geninfo_unexecuted_blocks=1 00:30:13.771 00:30:13.771 ' 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:13.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.771 --rc genhtml_branch_coverage=1 00:30:13.771 --rc genhtml_function_coverage=1 00:30:13.771 --rc 
genhtml_legend=1 00:30:13.771 --rc geninfo_all_blocks=1 00:30:13.771 --rc geninfo_unexecuted_blocks=1 00:30:13.771 00:30:13.771 ' 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.771 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.772 14:20:19 
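The lcov gate traced a little earlier walks a dotted-version comparison in scripts/common.sh ("lt 1.15 2" resolving to cmp_versions 1.15 '<' 2, which returns 0, so the coverage options get exported). A reconstruction from the @336-@368 trace lines; the equality fall-through and the non-numeric branch of decimal are not exercised here and are assumptions.

    # Reconstructed from the scripts/common.sh@333-@368 xtrace above.
    decimal() {
        local d=$1                                     # @353
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0    # @354/@355; the fallback is assumed
    }
    cmp_versions() {
        local ver1 ver1_l ver2 ver2_l op=$2 v          # @333/@334/@338
        IFS=.-: read -ra ver1 <<< "$1"                 # @336
        IFS=.-: read -ra ver2 <<< "$3"                 # @337
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}          # @340/@341
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do   # @364
            ver1[v]=$(decimal "${ver1[v]:-0}")         # @365
            ver2[v]=$(decimal "${ver2[v]:-0}")         # @366
            ((ver1[v] > ver2[v])) && { [[ $op == *'>'* ]] && return 0 || return 1; }  # @367
            ((ver1[v] < ver2[v])) && { [[ $op == *'<'* ]] && return 0 || return 1; }  # @368
        done
        [[ $op == *'='* ]]                             # equal versions; assumed behavior
    }
    lt() { cmp_versions "$1" '<' "$2"; }               # the "lt 1.15 2" entry point (@373)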
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:13.772 14:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:21.902 14:20:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:21.902 14:20:26 
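The array setup traced at @313-@344 is essentially a lookup table of vendor:device IDs per NIC family. Condensed from the trace (pci_bus_cache is populated elsewhere in common.sh; the E810 annotation is editorial):

    intel=0x8086 mellanox=0x15b3                     # @313
    e810+=(${pci_bus_cache["$intel:0x1592"]})        # @325
    e810+=(${pci_bus_cache["$intel:0x159b"]})        # @326: the ID matched below (0000:4b:00.0/.1)
    x722+=(${pci_bus_cache["$intel:0x37d2"]})        # @328
    mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})      # @330
    mlx+=(${pci_bus_cache["$mellanox:0x1021"]})      # @332
    # @334-@344 add the remaining mlx IDs: 0xa2d6 0x101d 0x101b 0x1017 0x1019 0x1015 0x1013
    pci_devs+=("${e810[@]}")                         # @346
    [[ $SPDK_TEST_NVMF_NICS == e810 ]] && pci_devs=("${e810[@]}")   # @355/@356, traces as [[ e810 == e810 ]]

With SPDK_TEST_NVMF_NICS=e810 from the job config, only the E810 buckets survive the narrowing at @356, which is why the scan below reports exactly the 0x159b pair.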
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:21.902 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:21.902 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:21.903 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.903 14:20:26 
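The per-device probe that follows (@366-@378) filters that list by driver state and transport. A hedged shape with assumed variable names, since the log only shows the expanded values (e.g. "ice" for the driver, "0x159b" for the device):

    for pci in "${pci_devs[@]}"; do
        echo "Found $pci ($ven_id - $dev_id)"             # @367
        if [[ $driver == unknown ]]; then :; fi           # @368: not taken, ice is bound
        if [[ $driver == unbound ]]; then continue; fi    # @372: skip unbound ports
        # @376/@377: 0x1017/0x1019 devices get extra handling; not taken for 0x159b
        # @378: rdma-only setup; skipped since the transport here is tcp
    done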
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:21.903 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:21.903 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
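Each surviving port is then mapped to its kernel interface through sysfs (@410-@429); the cvl_0_0/cvl_0_1 names echoed above are simply the directory names under the device's net/ node. A close paraphrase of the traced lines, with the up-state check summarized in a comment:

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)          # @411
        ((${#pci_net_devs[@]} == 0)) && continue                  # the "(( 1 == 0 ))" guard at @422
        # @417/@418 iterate the candidates and keep interfaces whose state is "up"
        pci_net_devs=("${pci_net_devs[@]##*/}")                   # @427: strip the sysfs path
        echo "Found net devices under $pci: ${pci_net_devs[*]}"   # @428
        net_devs+=("${pci_net_devs[@]}")                          # @429
    done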
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:21.903 14:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:21.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:21.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:30:21.903 00:30:21.903 --- 10.0.0.2 ping statistics --- 00:30:21.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:21.903 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:21.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:21.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:30:21.903 00:30:21.903 --- 10.0.0.1 ping statistics --- 00:30:21.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:21.903 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2938699 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2938699 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2938699 ']' 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
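(The nvmf_tcp_init sequence traced above boils down to the following sketch. The interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and port 4420 are taken from this run; treating cvl_0_0 as the target-side port and cvl_0_1 as the initiator-side port is the only assumption.)

    # move the target-side port into its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address both ends of the 10.0.0.0/24 test subnet
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator, default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target, inside ns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic in on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify reachability in both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1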
00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:21.903 14:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:21.903 [2024-12-05 14:20:27.273680] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:21.903 [2024-12-05 14:20:27.274811] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:30:21.903 [2024-12-05 14:20:27.274861] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:21.903 [2024-12-05 14:20:27.372915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:21.904 [2024-12-05 14:20:27.423667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:21.904 [2024-12-05 14:20:27.423715] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:21.904 [2024-12-05 14:20:27.423724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:21.904 [2024-12-05 14:20:27.423732] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:21.904 [2024-12-05 14:20:27.423738] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:21.904 [2024-12-05 14:20:27.425386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.904 [2024-12-05 14:20:27.425391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.904 [2024-12-05 14:20:27.503107] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:21.904 [2024-12-05 14:20:27.503822] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:21.904 [2024-12-05 14:20:27.504064] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
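(nvmfappstart above launches the target inside that namespace with interrupt mode enabled and then waits on its RPC socket; a minimal stand-in is sketched below. The rpc.py polling loop is an assumption: the harness's waitforlisten does the same job with more retries and diagnostics.)

    # start nvmf_tgt in the target namespace, interrupt mode, cores 0-1
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # block until the app answers on /var/tmp/spdk.sock, or bail if it died
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited"; exit 1; }
        sleep 0.1
    done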
00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:21.904 [2024-12-05 14:20:28.130450] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:21.904 [2024-12-05 14:20:28.162906] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:21.904 NULL1 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.904 14:20:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:21.904 Delay0 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.904 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:22.164 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.164 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2938901 00:30:22.164 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:22.164 14:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:22.164 [2024-12-05 14:20:28.289356] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
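(Spelled out against the RPC socket, the rpc_cmd sequence above amounts to the sketch below. The rpc() wrapper is an assumption standing in for the harness's rpc_cmd; the delay-bdev latencies are given in microseconds, so Delay0 adds roughly one second per I/O, which is what keeps commands in flight for the upcoming deletion to race against.)

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_null_create NULL1 1000 512       # 1000 MiB null bdev, 512-byte blocks
    rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # random 70/30 read/write load at queue depth 128 from lcores 2-3
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!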
00:30:24.074 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:24.074 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.074 14:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:30:24.334 [long runs of interleaved 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' messages elided: the subsystem is deleted while spdk_nvme_perf still has I/O queued, so the outstanding commands complete with status sct=0, sc=8 -- Command Aborted due to SQ Deletion. The driver-side errors raised while this happens are kept below:]
[2024-12-05 14:20:30.428964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7680 is same with the state(6) to be set
[2024-12-05 14:20:30.433169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efd3000d020 is same with the state(6) to be set
[2024-12-05 14:20:31.389756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f89b0 is same with the state(6) to be set
[2024-12-05 14:20:31.433379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f74a0 is same with the state(6) to be set
[2024-12-05 14:20:31.433531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7860 is same with the state(6) to be set
[2024-12-05 14:20:31.435071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efd30000c40 is same with the state(6) to be set
[2024-12-05 14:20:31.435435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efd3000d350 is same with the state(6) to be set
00:30:25.275 Initializing NVMe Controllers
00:30:25.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:25.275 Controller IO queue size 128, less than required.
00:30:25.275 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:25.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:30:25.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:30:25.275 Initialization complete. Launching workers. 
00:30:25.275 ======================================================== 00:30:25.275 Latency(us) 00:30:25.275 Device Information : IOPS MiB/s Average min max 00:30:25.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.58 0.08 895930.41 425.00 1009669.41 00:30:25.275 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 176.54 0.09 940105.17 449.58 2002551.10 00:30:25.276 ======================================================== 00:30:25.276 Total : 346.12 0.17 918462.07 425.00 2002551.10 00:30:25.276 00:30:25.276 [2024-12-05 14:20:31.435925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f89b0 (9): Bad file descriptor 00:30:25.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:25.276 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.276 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:30:25.276 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2938901 00:30:25.276 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:30:25.846 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:30:25.846 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2938901 00:30:25.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2938901) - No such process 00:30:25.846 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2938901 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2938901 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2938901 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:25.847 [2024-12-05 14:20:31.970831] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2939614 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2939614 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:25.847 14:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:25.847 [2024-12-05 14:20:32.069494] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
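(The sleep/kill lines below are the test waiting for spdk_nvme_perf to exit once the subsystem underneath it is deleted; the loop reduces to roughly the sketch that follows. The 0.5 s period and the 20-iteration cap are from this trace; the variable names are assumptions.)

    delay=0
    # poll until spdk_nvme_perf exits; it loses its controller when the
    # subsystem is deleted and should terminate on its own
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && { echo "perf did not exit"; exit 1; }
        sleep 0.5
    done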
00:30:26.417 14:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:26.417 14:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2939614 00:30:26.417 14:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:26.987 14:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:26.987 14:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2939614 00:30:26.987 14:20:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:27.247 14:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:27.247 14:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2939614 00:30:27.247 14:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:27.815 14:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:27.815 14:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2939614 00:30:27.815 14:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:28.383 14:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:28.383 14:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2939614 00:30:28.383 14:20:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:28.968 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:28.969 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2939614 00:30:28.969 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:29.233 Initializing NVMe Controllers 00:30:29.233 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:29.233 Controller IO queue size 128, less than required. 00:30:29.233 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:29.233 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:29.233 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:29.233 Initialization complete. Launching workers. 
00:30:29.233 ======================================================== 00:30:29.233 Latency(us) 00:30:29.233 Device Information : IOPS MiB/s Average min max 00:30:29.233 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002559.23 1000282.39 1041294.23 00:30:29.233 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004172.35 1000342.52 1009654.00 00:30:29.233 ======================================================== 00:30:29.233 Total : 256.00 0.12 1003365.79 1000282.39 1041294.23 00:30:29.233 00:30:29.233 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:29.233 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2939614 00:30:29.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2939614) - No such process 00:30:29.233 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2939614 00:30:29.233 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:29.233 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:30:29.233 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:29.233 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:30:29.233 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:29.233 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:30:29.233 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:29.492 rmmod nvme_tcp 00:30:29.492 rmmod nvme_fabrics 00:30:29.492 rmmod nvme_keyring 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2938699 ']' 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2938699 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2938699 ']' 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2938699 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2938699 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2938699' 00:30:29.492 killing process with pid 2938699 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2938699 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2938699 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:29.492 14:20:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.025 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:32.025 00:30:32.025 real 0m18.319s 00:30:32.025 user 0m26.812s 00:30:32.025 sys 0m7.372s 00:30:32.025 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:32.025 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:32.025 ************************************ 00:30:32.025 END TEST nvmf_delete_subsystem 00:30:32.025 ************************************ 00:30:32.025 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:32.025 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:32.025 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:30:32.025 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:32.025 ************************************ 00:30:32.025 START TEST nvmf_host_management 00:30:32.025 ************************************ 00:30:32.025 14:20:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:32.025 * Looking for test storage... 00:30:32.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:32.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.025 --rc genhtml_branch_coverage=1 00:30:32.025 --rc genhtml_function_coverage=1 00:30:32.025 --rc genhtml_legend=1 00:30:32.025 --rc geninfo_all_blocks=1 00:30:32.025 --rc geninfo_unexecuted_blocks=1 00:30:32.025 00:30:32.025 ' 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:32.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.025 --rc genhtml_branch_coverage=1 00:30:32.025 --rc genhtml_function_coverage=1 00:30:32.025 --rc genhtml_legend=1 00:30:32.025 --rc geninfo_all_blocks=1 00:30:32.025 --rc geninfo_unexecuted_blocks=1 00:30:32.025 00:30:32.025 ' 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:32.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.025 --rc genhtml_branch_coverage=1 00:30:32.025 --rc genhtml_function_coverage=1 00:30:32.025 --rc genhtml_legend=1 00:30:32.025 --rc geninfo_all_blocks=1 00:30:32.025 --rc geninfo_unexecuted_blocks=1 00:30:32.025 00:30:32.025 ' 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:32.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:32.025 --rc genhtml_branch_coverage=1 00:30:32.025 --rc genhtml_function_coverage=1 00:30:32.025 --rc genhtml_legend=1 
00:30:32.025 --rc geninfo_all_blocks=1 00:30:32.025 --rc geninfo_unexecuted_blocks=1 00:30:32.025 00:30:32.025 ' 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:32.025 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:32.026 14:20:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:30:32.026 14:20:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:40.161 14:20:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:40.161 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:40.162 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:40.162 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
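The device walk above resolves each matched PCI function to the net devices the kernel created for it by globbing sysfs, which is how 0000:4b:00.0 and 0000:4b:00.1 map to cvl_0_0 and cvl_0_1 just below. A minimal stand-alone sketch of that lookup, using the sysfs path seen in the trace; the helper name pci_net_devs_for is illustrative, not part of the harness:

    # Print the kernel net devices bound to one PCI function, mirroring
    # nvmf/common.sh's pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob.
    pci_net_devs_for() {
        local pci=$1 dev
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$dev" ] && echo "${dev##*/}"
        done
    }
    pci_net_devs_for 0000:4b:00.0    # on this node: cvl_0_0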
00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:40.162 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:40.162 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:40.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:40.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:30:40.162 00:30:40.162 --- 10.0.0.2 ping statistics --- 00:30:40.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.162 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:40.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:40.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:30:40.162 00:30:40.162 --- 10.0.0.1 ping statistics --- 00:30:40.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.162 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2944399 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2944399 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2944399 ']' 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:40.162 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:40.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:40.163 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:40.163 14:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:40.163 [2024-12-05 14:20:45.690837] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:40.163 [2024-12-05 14:20:45.691987] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:30:40.163 [2024-12-05 14:20:45.692035] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:40.163 [2024-12-05 14:20:45.789997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:40.163 [2024-12-05 14:20:45.842634] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:40.163 [2024-12-05 14:20:45.842682] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:40.163 [2024-12-05 14:20:45.842691] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:40.163 [2024-12-05 14:20:45.842698] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:40.163 [2024-12-05 14:20:45.842705] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:40.163 [2024-12-05 14:20:45.844707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:40.163 [2024-12-05 14:20:45.844871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:40.163 [2024-12-05 14:20:45.845032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:40.163 [2024-12-05 14:20:45.845032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:40.163 [2024-12-05 14:20:45.923716] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:40.163 [2024-12-05 14:20:45.924665] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:40.163 [2024-12-05 14:20:45.925065] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:40.163 [2024-12-05 14:20:45.925571] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:40.163 [2024-12-05 14:20:45.925630] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
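The target is now up inside the namespace. The setup replayed a moment earlier split the two E810 ports across a network-namespace boundary: cvl_0_0 (10.0.0.2, the target side) was moved into cvl_0_0_ns_spdk while cvl_0_1 (10.0.0.1, the initiator side) stayed on the host, so NVMe/TCP traffic crosses a real link even though both ends share one machine. A condensed sketch of that topology, built only from commands that appear in the trace (variable names are illustrative):

    NS=cvl_0_0_ns_spdk TGT_IF=cvl_0_0 INI_IF=cvl_0_1
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"        # initiator side stays on the host
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                           # host -> namespace sanity check

The nvmf_tgt launch above is wrapped in ip netns exec for the same reason, and its -m 0x1E core mask (bits 1 through 4 set) is why the reactors report starting on cores 1-4.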
00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:40.425 [2024-12-05 14:20:46.541892] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:40.425 Malloc0 00:30:40.425 [2024-12-05 14:20:46.650218] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2944724 00:30:40.425 14:20:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2944724 /var/tmp/bdevperf.sock 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2944724 ']' 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:40.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:40.425 { 00:30:40.425 "params": { 00:30:40.425 "name": "Nvme$subsystem", 00:30:40.425 "trtype": "$TEST_TRANSPORT", 00:30:40.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:40.425 "adrfam": "ipv4", 00:30:40.425 "trsvcid": "$NVMF_PORT", 00:30:40.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:40.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:40.425 "hdgst": ${hdgst:-false}, 00:30:40.425 "ddgst": ${ddgst:-false} 00:30:40.425 }, 00:30:40.425 "method": "bdev_nvme_attach_controller" 00:30:40.425 } 00:30:40.425 EOF 00:30:40.425 )") 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:40.425 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
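gen_nvmf_target_json builds the bdevperf configuration inline: the heredoc above is a per-subsystem template, cat joins the generated fragments, and jq re-serializes the result, which reaches bdevperf as the /dev/fd/63 seen on its command line, i.e. through bash process substitution. A stand-alone sketch of that invocation pattern, with the binary path and flags taken from the trace:

    # No temp file needed: <(...) materializes as /dev/fd/63 for the child.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10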
00:30:40.685 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:40.685 14:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:40.685 "params": { 00:30:40.685 "name": "Nvme0", 00:30:40.685 "trtype": "tcp", 00:30:40.685 "traddr": "10.0.0.2", 00:30:40.685 "adrfam": "ipv4", 00:30:40.685 "trsvcid": "4420", 00:30:40.685 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:40.685 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:40.685 "hdgst": false, 00:30:40.685 "ddgst": false 00:30:40.685 }, 00:30:40.685 "method": "bdev_nvme_attach_controller" 00:30:40.685 }' 00:30:40.685 [2024-12-05 14:20:46.771649] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:30:40.685 [2024-12-05 14:20:46.771726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2944724 ] 00:30:40.685 [2024-12-05 14:20:46.866495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.685 [2024-12-05 14:20:46.919561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.946 Running I/O for 10 seconds... 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r 
'.bdevs[0].num_read_ops' 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.522 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:41.522 [2024-12-05 14:20:47.675893] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.675962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.675972] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.675980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.675988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.675996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676003] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676011] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 
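The waitforio helper that just returned polls bdevperf's iostat over its RPC socket until Nvme0n1 has served at least 100 reads (835 on the first usable sample here), proving I/O is flowing before the host is removed from the subsystem; the surrounding flood of tqpair notices is the qpair teardown that removal triggers. A rough equivalent of the polling loop, assuming SPDK's stock scripts/rpc.py; the sleep and attempt count are illustrative:

    for i in $(seq 1 10); do
        reads=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && break    # enough reads observed; stop waiting
        sleep 1
    done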
00:30:41.522 [2024-12-05 14:20:47.676039] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676046] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676053] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676060] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676103] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676111] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676175] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676211] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676219] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.522 [2024-12-05 14:20:47.676253] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.523 [2024-12-05 14:20:47.676261] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.523 [2024-12-05 14:20:47.676268] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.523 [2024-12-05 14:20:47.676275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.523 [2024-12-05 14:20:47.676282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.523 [2024-12-05 14:20:47.676290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.523 [2024-12-05 14:20:47.676297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.523 [2024-12-05 14:20:47.676304] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.523 [2024-12-05 14:20:47.676311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.523 [2024-12-05 14:20:47.676318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.523 [2024-12-05 14:20:47.676324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.523 [2024-12-05 14:20:47.676331] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.523 [2024-12-05 14:20:47.676338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.523 [2024-12-05 14:20:47.676346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set 00:30:41.523 [2024-12-05 14:20:47.676355] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143c0e0 is same with the state(6) to be set
00:30:41.523 [... last message repeated five more times for tqpair=0x143c0e0, 14:20:47.676362 through 14:20:47.676390 ...]
00:30:41.523 [2024-12-05 14:20:47.676594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:41.523 [2024-12-05 14:20:47.676651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:41.523 [... matching READ / ABORTED - SQ DELETION pairs repeated for cid:1 through cid:62, lba 114816 through 122624: every read still queued on qid:1 was aborted when its submission queue was deleted during the controller reset ...]
00:30:41.524 [2024-12-05 14:20:47.677787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:41.524 [2024-12-05 14:20:47.677794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:41.524 [2024-12-05 14:20:47.677804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25caee0 is same with the state(6) to be set
00:30:41.524 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:41.524 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:30:41.524 [2024-12-05 14:20:47.679148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:30:41.524 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:41.524 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:30:41.524 task offset: 114688 on job bdev=Nvme0n1 fails
00:30:41.524
00:30:41.524 Latency(us)
00:30:41.524 [2024-12-05T13:20:47.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:41.524 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:41.524 Job: Nvme0n1 ended in about 0.59 seconds
with error 00:30:41.524 Verification LBA range: start 0x0 length 0x400 00:30:41.524 Nvme0n1 : 0.59 1512.15 94.51 108.01 0.00 38560.21 4423.68 33423.36 00:30:41.524 [2024-12-05T13:20:47.824Z] =================================================================================================================== 00:30:41.524 [2024-12-05T13:20:47.824Z] Total : 1512.15 94.51 108.01 0.00 38560.21 4423.68 33423.36 00:30:41.525 [2024-12-05 14:20:47.681400] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:41.525 [2024-12-05 14:20:47.681443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b2010 (9): Bad file descriptor 00:30:41.525 [2024-12-05 14:20:47.683157] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:30:41.525 [2024-12-05 14:20:47.683312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:41.525 [2024-12-05 14:20:47.683342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:41.525 [2024-12-05 14:20:47.683360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:30:41.525 [2024-12-05 14:20:47.683370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:30:41.525 [2024-12-05 14:20:47.683385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:41.525 [2024-12-05 14:20:47.683393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23b2010 00:30:41.525 [2024-12-05 14:20:47.683417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b2010 (9): Bad file descriptor 00:30:41.525 [2024-12-05 14:20:47.683431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:41.525 [2024-12-05 14:20:47.683440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:41.525 [2024-12-05 14:20:47.683450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:41.525 [2024-12-05 14:20:47.683468] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
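The CONNECT rejection above (sct 1, sc 132, "does not allow host") is the expected side effect of host_management.sh toggling the subsystem's host allow-list while the reset is in flight. A minimal sketch of that allow-list round trip with SPDK's rpc.py, assuming the target from this run is still listening on 10.0.0.2:4420:

  # drop the initiator from cnode0's allow-list; reconnect attempts then fail
  # with "Subsystem ... does not allow host" (sct 1, sc 132), as logged above
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # re-admit the host so the next controller reset can complete its Fabric CONNECT
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0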
00:30:41.525 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.525 14:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:30:42.470 14:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2944724 00:30:42.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2944724) - No such process 00:30:42.470 14:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:30:42.470 14:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:30:42.470 14:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:30:42.470 14:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:42.470 14:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:42.470 14:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:42.470 14:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:42.470 14:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:42.470 { 00:30:42.470 "params": { 00:30:42.470 "name": "Nvme$subsystem", 00:30:42.470 "trtype": "$TEST_TRANSPORT", 00:30:42.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:42.470 "adrfam": "ipv4", 00:30:42.470 "trsvcid": "$NVMF_PORT", 00:30:42.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:42.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:42.470 "hdgst": ${hdgst:-false}, 00:30:42.470 "ddgst": ${ddgst:-false} 00:30:42.470 }, 00:30:42.470 "method": "bdev_nvme_attach_controller" 00:30:42.470 } 00:30:42.470 EOF 00:30:42.470 )") 00:30:42.470 14:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:42.470 14:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:30:42.470 14:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:42.470 14:20:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:42.470 "params": { 00:30:42.470 "name": "Nvme0", 00:30:42.470 "trtype": "tcp", 00:30:42.470 "traddr": "10.0.0.2", 00:30:42.470 "adrfam": "ipv4", 00:30:42.470 "trsvcid": "4420", 00:30:42.470 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:42.470 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:42.470 "hdgst": false, 00:30:42.470 "ddgst": false 00:30:42.470 }, 00:30:42.470 "method": "bdev_nvme_attach_controller" 00:30:42.470 }' 00:30:42.470 [2024-12-05 14:20:48.752439] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:30:42.470 [2024-12-05 14:20:48.752524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2945119 ] 00:30:42.731 [2024-12-05 14:20:48.845814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.731 [2024-12-05 14:20:48.898634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.991 Running I/O for 1 seconds... 00:30:44.375 2072.00 IOPS, 129.50 MiB/s 00:30:44.375 Latency(us) 00:30:44.375 [2024-12-05T13:20:50.675Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.375 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:44.375 Verification LBA range: start 0x0 length 0x400 00:30:44.375 Nvme0n1 : 1.01 2113.72 132.11 0.00 0.00 29589.58 3181.23 37355.52 00:30:44.375 [2024-12-05T13:20:50.675Z] =================================================================================================================== 00:30:44.375 [2024-12-05T13:20:50.675Z] Total : 2113.72 132.11 0.00 0.00 29589.58 3181.23 37355.52 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:44.375 rmmod nvme_tcp 00:30:44.375 rmmod nvme_fabrics 00:30:44.375 rmmod nvme_keyring 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2944399 ']' 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2944399 00:30:44.375 14:20:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2944399 ']' 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2944399 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2944399 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2944399' 00:30:44.375 killing process with pid 2944399 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2944399 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2944399 00:30:44.375 [2024-12-05 14:20:50.605209] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:44.375 14:20:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.918 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:46.918 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:46.918 00:30:46.918 real 0m14.782s 00:30:46.918 user 
0m19.812s 00:30:46.918 sys 0m7.683s 00:30:46.918 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:46.918 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:46.918 ************************************ 00:30:46.918 END TEST nvmf_host_management 00:30:46.918 ************************************ 00:30:46.918 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:46.918 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:46.918 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:46.918 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:46.918 ************************************ 00:30:46.918 START TEST nvmf_lvol 00:30:46.918 ************************************ 00:30:46.918 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:46.918 * Looking for test storage... 00:30:46.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:46.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.919 --rc genhtml_branch_coverage=1 00:30:46.919 --rc genhtml_function_coverage=1 00:30:46.919 --rc genhtml_legend=1 00:30:46.919 --rc geninfo_all_blocks=1 00:30:46.919 --rc geninfo_unexecuted_blocks=1 00:30:46.919 00:30:46.919 ' 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:46.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.919 --rc genhtml_branch_coverage=1 00:30:46.919 --rc genhtml_function_coverage=1 00:30:46.919 --rc genhtml_legend=1 00:30:46.919 --rc geninfo_all_blocks=1 00:30:46.919 --rc geninfo_unexecuted_blocks=1 00:30:46.919 00:30:46.919 ' 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:46.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.919 --rc genhtml_branch_coverage=1 00:30:46.919 --rc genhtml_function_coverage=1 00:30:46.919 --rc genhtml_legend=1 00:30:46.919 --rc geninfo_all_blocks=1 00:30:46.919 --rc geninfo_unexecuted_blocks=1 00:30:46.919 00:30:46.919 ' 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:46.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.919 --rc genhtml_branch_coverage=1 00:30:46.919 --rc genhtml_function_coverage=1 
00:30:46.919 --rc genhtml_legend=1 00:30:46.919 --rc geninfo_all_blocks=1 00:30:46.919 --rc geninfo_unexecuted_blocks=1 00:30:46.919 00:30:46.919 ' 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:46.919 14:20:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:46.919 14:20:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:46.919 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:46.920 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:46.920 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:46.920 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:46.920 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:46.920 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:46.920 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:46.920 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:46.920 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:46.920 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:46.920 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:46.920 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:46.920 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:46.920 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:46.920 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:46.920 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.920 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:46.920 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.920 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:46.920 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:46.920 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:46.920 14:20:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:55.059 14:21:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:55.059 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:55.059 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:55.059 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:55.060 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:55.060 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:55.060 
14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:55.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:55.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:30:55.060 00:30:55.060 --- 10.0.0.2 ping statistics --- 00:30:55.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.060 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:55.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:55.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:30:55.060 00:30:55.060 --- 10.0.0.1 ping statistics --- 00:30:55.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.060 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2949482 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2949482 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2949482 ']' 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:55.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:55.060 14:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:55.060 [2024-12-05 14:21:00.561602] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
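Before the target's startup notices continue below, it is worth condensing the network fixture the trace above just assembled: the first E810 port (cvl_0_0) is moved into a private namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens the NVMe/TCP port, and two pings prove the loop. A minimal standalone sketch of the same steps, assuming a two-port NIC wired back-to-back and a root shell (interface names and addresses mirror the log):

    ip netns add cvl_0_0_ns_spdk                      # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port in
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port; the comment tag lets cleanup strip the rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # namespace -> root ns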
00:30:55.060 [2024-12-05 14:21:00.562720] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:30:55.060 [2024-12-05 14:21:00.562772] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:55.060 [2024-12-05 14:21:00.665387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:55.060 [2024-12-05 14:21:00.719482] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:55.060 [2024-12-05 14:21:00.719536] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:55.060 [2024-12-05 14:21:00.719545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:55.060 [2024-12-05 14:21:00.719553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:55.060 [2024-12-05 14:21:00.719560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:55.060 [2024-12-05 14:21:00.721451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:55.060 [2024-12-05 14:21:00.721598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.060 [2024-12-05 14:21:00.721597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:55.060 [2024-12-05 14:21:00.804735] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:55.060 [2024-12-05 14:21:00.805799] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:55.060 [2024-12-05 14:21:00.806634] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:55.060 [2024-12-05 14:21:00.806676] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
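The notices above are the visible effect of nvmfappstart -m 0x7: the target runs inside the namespace with three reactors (cores 0-2) and --interrupt-mode, so each poll-group thread parks on an event fd instead of busy-polling. A sketch of the equivalent manual launch, assuming the same build tree ($SPDK is shorthand introduced here, not a variable from the log, and the readiness probe is one common pattern rather than the harness's own waitforlisten helper):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
    # poll until the RPC socket answers; rpc_get_methods is a standard no-op probe
    until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done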
00:30:55.323 14:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:55.323 14:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:30:55.323 14:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:55.323 14:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:55.323 14:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:55.323 14:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:55.323 14:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:55.323 [2024-12-05 14:21:01.598790] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:55.584 14:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:55.584 14:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:30:55.584 14:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:55.845 14:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:30:55.845 14:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:30:56.106 14:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:30:56.368 14:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d41ffc84-2e21-43b0-a3cf-cb375d8fcbb2 00:30:56.368 14:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d41ffc84-2e21-43b0-a3cf-cb375d8fcbb2 lvol 20 00:30:56.368 14:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=0d85ef3c-5922-48dc-a5e3-a2fdd626e550 00:30:56.368 14:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:56.629 14:21:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0d85ef3c-5922-48dc-a5e3-a2fdd626e550 00:30:56.890 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:56.890 [2024-12-05 14:21:03.182708] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:30:57.154 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:57.154 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2950268 00:30:57.154 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:30:57.154 14:21:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:30:58.540 14:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 0d85ef3c-5922-48dc-a5e3-a2fdd626e550 MY_SNAPSHOT 00:30:58.540 14:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=62c5110b-be6d-4938-aa4e-1f8addafa4af 00:30:58.540 14:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 0d85ef3c-5922-48dc-a5e3-a2fdd626e550 30 00:30:58.800 14:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 62c5110b-be6d-4938-aa4e-1f8addafa4af MY_CLONE 00:30:59.061 14:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a2cad752-97ab-4ec8-b94f-a4915c967121 00:30:59.061 14:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a2cad752-97ab-4ec8-b94f-a4915c967121 00:30:59.321 14:21:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2950268 00:31:09.366 Initializing NVMe Controllers 00:31:09.366 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:09.366 Controller IO queue size 128, less than required. 00:31:09.366 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:09.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:09.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:09.366 Initialization complete. Launching workers. 
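Before the latency summary that follows, the whole nvmf_lvol exercise above condenses to a short RPC sequence: build a raid0 from two malloc bdevs, layer a logical volume store and a 20 MiB volume on it, export the volume over NVMe/TCP, then snapshot, resize, clone and inflate it while spdk_nvme_perf writes at queue depth 128 — the point being that these lvol operations are safe under live I/O. A condensed replay (rpc.py and spdk_nvme_perf abbreviate the full paths logged above; sizes are MiB):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                       # -> Malloc0
    rpc.py bdev_malloc_create 64 512                       # -> Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    perf_pid=$!
    snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # taken under live I/O
    rpc.py bdev_lvol_resize "$lvol" 30
    clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
    rpc.py bdev_lvol_inflate "$clone"
    wait "$perf_pid"                                       # perf prints the table below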
00:31:09.366 ========================================================
00:31:09.366                                                                            Latency(us)
00:31:09.366 Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:31:09.366 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:   15271.00      59.65    8383.21    1870.93   96010.77
00:31:09.366 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   15177.00      59.29    8433.87    4167.61   71664.59
00:31:09.366 ========================================================
00:31:09.366 Total                                                                :   30448.00     118.94    8408.46    1870.93   96010.77
00:31:09.366
00:31:09.366 14:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:09.366 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0d85ef3c-5922-48dc-a5e3-a2fdd626e550 00:31:09.366 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d41ffc84-2e21-43b0-a3cf-cb375d8fcbb2 00:31:09.366 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:31:09.366 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:31:09.366 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:09.367 rmmod nvme_tcp 00:31:09.367 rmmod nvme_fabrics 00:31:09.367 rmmod nvme_keyring 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2949482 ']' 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2949482 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2949482 ']' 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2949482 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2949482 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2949482' 00:31:09.367 killing process with pid 2949482 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2949482 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2949482 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:09.367 14:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.746 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:10.746 00:31:10.746 real 0m24.019s 00:31:10.746 user 0m56.379s 00:31:10.746 sys 0m10.921s 00:31:10.746 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:10.746 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:10.746 ************************************ 00:31:10.746 END TEST nvmf_lvol 00:31:10.746 ************************************ 00:31:10.746 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:10.746 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:10.746 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:10.746 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:10.746 ************************************ 00:31:10.746 START TEST nvmf_lvs_grow 00:31:10.746 
************************************ 00:31:10.746 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:10.746 * Looking for test storage... 00:31:10.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:10.746 14:21:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:10.746 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:31:10.746 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:11.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.007 --rc genhtml_branch_coverage=1 00:31:11.007 --rc genhtml_function_coverage=1 00:31:11.007 --rc genhtml_legend=1 00:31:11.007 --rc geninfo_all_blocks=1 00:31:11.007 --rc geninfo_unexecuted_blocks=1 00:31:11.007 00:31:11.007 ' 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:11.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.007 --rc genhtml_branch_coverage=1 00:31:11.007 --rc genhtml_function_coverage=1 00:31:11.007 --rc genhtml_legend=1 00:31:11.007 --rc geninfo_all_blocks=1 00:31:11.007 --rc geninfo_unexecuted_blocks=1 00:31:11.007 00:31:11.007 ' 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:11.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.007 --rc genhtml_branch_coverage=1 00:31:11.007 --rc genhtml_function_coverage=1 00:31:11.007 --rc genhtml_legend=1 00:31:11.007 --rc geninfo_all_blocks=1 00:31:11.007 --rc geninfo_unexecuted_blocks=1 00:31:11.007 00:31:11.007 ' 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:11.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.007 --rc genhtml_branch_coverage=1 00:31:11.007 --rc genhtml_function_coverage=1 00:31:11.007 --rc genhtml_legend=1 00:31:11.007 --rc geninfo_all_blocks=1 00:31:11.007 --rc geninfo_unexecuted_blocks=1 00:31:11.007 00:31:11.007 ' 00:31:11.007 14:21:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:11.007 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:31:11.008 14:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:19.153 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:19.153 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:31:19.153 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:19.153 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:19.153 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:19.153 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:19.153 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:19.153 14:21:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:31:19.153 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:19.153 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:31:19.153 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:31:19.153 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:31:19.153 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:31:19.153 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:31:19.153 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:31:19.153 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:19.153 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:19.153 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:19.153 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:19.153 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:19.153 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
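The arrays just defined enumerate every NIC model the harness recognizes by PCI vendor/device ID (Intel E810 variants 0x1592/0x159b, X722 0x37d2, plus a list of Mellanox parts); the loop that now runs, whose output follows, matches each detected PCI function against them and resolves the bound kernel net devices through sysfs. A standalone equivalent for the 0x159b E810 used on this rig, assuming lspci is available (the harness itself walks a pre-built PCI bus cache rather than calling lspci):

    # list both functions of the two-port E810 and show their net devices
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done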
00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:19.154 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:19.154 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:19.154 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:19.154 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:19.154 14:21:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:19.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:19.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:31:19.154 00:31:19.154 --- 10.0.0.2 ping statistics --- 00:31:19.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.154 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:19.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:19.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:31:19.154 00:31:19.154 --- 10.0.0.1 ping statistics --- 00:31:19.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.154 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:19.154 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2956953 00:31:19.155 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2956953 00:31:19.155 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:19.155 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2956953 ']' 00:31:19.155 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:19.155 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:19.155 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:19.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:19.155 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:19.155 14:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:19.155 [2024-12-05 14:21:24.699284] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
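This second target, serving the lvs_grow suite, is started with -m 0x1: a single reactor on core 0, again in interrupt mode, which is enough because this suite exercises the lvstore management path rather than the data path. One way to confirm the reactor layout once the app is up is the framework_get_reactors RPC; the jq filter and the in_interrupt field name below are written from memory and should be checked against the running SPDK version:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/scripts/rpc.py" framework_get_reactors \
        | jq '.reactors[] | {lcore, in_interrupt}'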
00:31:19.155 [2024-12-05 14:21:24.700426] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:31:19.155 [2024-12-05 14:21:24.700482] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:19.155 [2024-12-05 14:21:24.799323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:19.155 [2024-12-05 14:21:24.854244] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:19.155 [2024-12-05 14:21:24.854299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:19.155 [2024-12-05 14:21:24.854308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:19.155 [2024-12-05 14:21:24.854315] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:19.155 [2024-12-05 14:21:24.854322] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:19.155 [2024-12-05 14:21:24.855067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.155 [2024-12-05 14:21:24.933568] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:19.155 [2024-12-05 14:21:24.933847] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:19.415 14:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:19.415 14:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:31:19.415 14:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:19.415 14:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:19.415 14:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:19.415 14:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:19.415 14:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:19.675 [2024-12-05 14:21:25.723975] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:19.675 14:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:31:19.675 14:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:19.675 14:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:19.675 14:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:19.675 ************************************ 00:31:19.675 START TEST lvs_grow_clean 00:31:19.675 ************************************ 00:31:19.675 14:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:31:19.675 14:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:19.675 14:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:19.675 14:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:19.675 14:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:19.675 14:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:19.675 14:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:19.675 14:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:19.675 14:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:19.675 14:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:19.936 14:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:19.936 14:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:20.196 14:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=fc34c7f8-d780-47cf-8781-2fac6fe2a910 00:31:20.196 14:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc34c7f8-d780-47cf-8781-2fac6fe2a910 00:31:20.196 14:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:20.196 14:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:20.196 14:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:20.196 14:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fc34c7f8-d780-47cf-8781-2fac6fe2a910 lvol 150 00:31:20.456 14:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8679ac63-c2c3-4023-93b1-e28154d2087d 00:31:20.456 14:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:20.456 14:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:20.717 [2024-12-05 14:21:26.763647] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:20.717 [2024-12-05 14:21:26.763802] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:20.717 true 00:31:20.717 14:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc34c7f8-d780-47cf-8781-2fac6fe2a910 00:31:20.717 14:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:20.717 14:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:20.717 14:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:20.980 14:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8679ac63-c2c3-4023-93b1-e28154d2087d 00:31:21.241 14:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:21.241 [2024-12-05 14:21:27.492312] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.241 14:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:21.503 14:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2957459 00:31:21.503 14:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:21.503 14:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2957459 /var/tmp/bdevperf.sock 00:31:21.503 14:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:21.503 14:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2957459 ']' 00:31:21.503 14:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:31:21.503 14:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:21.503 14:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:21.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:21.503 14:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:21.503 14:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:21.503 [2024-12-05 14:21:27.747060] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:31:21.503 [2024-12-05 14:21:27.747130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2957459 ] 00:31:21.765 [2024-12-05 14:21:27.841126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.766 [2024-12-05 14:21:27.893117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:22.338 14:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:22.338 14:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:31:22.338 14:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:22.599 Nvme0n1 00:31:22.599 14:21:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:22.860 [ 00:31:22.860 { 00:31:22.860 "name": "Nvme0n1", 00:31:22.860 "aliases": [ 00:31:22.860 "8679ac63-c2c3-4023-93b1-e28154d2087d" 00:31:22.860 ], 00:31:22.860 "product_name": "NVMe disk", 00:31:22.860 "block_size": 4096, 00:31:22.860 "num_blocks": 38912, 00:31:22.860 "uuid": "8679ac63-c2c3-4023-93b1-e28154d2087d", 00:31:22.860 "numa_id": 0, 00:31:22.860 "assigned_rate_limits": { 00:31:22.860 "rw_ios_per_sec": 0, 00:31:22.860 "rw_mbytes_per_sec": 0, 00:31:22.860 "r_mbytes_per_sec": 0, 00:31:22.860 "w_mbytes_per_sec": 0 00:31:22.860 }, 00:31:22.860 "claimed": false, 00:31:22.860 "zoned": false, 00:31:22.860 "supported_io_types": { 00:31:22.860 "read": true, 00:31:22.860 "write": true, 00:31:22.860 "unmap": true, 00:31:22.860 "flush": true, 00:31:22.860 "reset": true, 00:31:22.860 "nvme_admin": true, 00:31:22.860 "nvme_io": true, 00:31:22.860 "nvme_io_md": false, 00:31:22.860 "write_zeroes": true, 00:31:22.860 "zcopy": false, 00:31:22.860 "get_zone_info": false, 00:31:22.860 "zone_management": false, 00:31:22.860 "zone_append": false, 00:31:22.860 "compare": true, 00:31:22.860 "compare_and_write": true, 00:31:22.860 "abort": true, 00:31:22.860 "seek_hole": false, 00:31:22.860 "seek_data": false, 00:31:22.860 "copy": true, 
00:31:22.860 "nvme_iov_md": false 00:31:22.860 }, 00:31:22.860 "memory_domains": [ 00:31:22.860 { 00:31:22.860 "dma_device_id": "system", 00:31:22.860 "dma_device_type": 1 00:31:22.860 } 00:31:22.860 ], 00:31:22.860 "driver_specific": { 00:31:22.860 "nvme": [ 00:31:22.860 { 00:31:22.860 "trid": { 00:31:22.860 "trtype": "TCP", 00:31:22.860 "adrfam": "IPv4", 00:31:22.860 "traddr": "10.0.0.2", 00:31:22.860 "trsvcid": "4420", 00:31:22.860 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:22.860 }, 00:31:22.860 "ctrlr_data": { 00:31:22.860 "cntlid": 1, 00:31:22.860 "vendor_id": "0x8086", 00:31:22.860 "model_number": "SPDK bdev Controller", 00:31:22.860 "serial_number": "SPDK0", 00:31:22.860 "firmware_revision": "25.01", 00:31:22.860 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:22.860 "oacs": { 00:31:22.860 "security": 0, 00:31:22.860 "format": 0, 00:31:22.860 "firmware": 0, 00:31:22.860 "ns_manage": 0 00:31:22.860 }, 00:31:22.860 "multi_ctrlr": true, 00:31:22.860 "ana_reporting": false 00:31:22.860 }, 00:31:22.860 "vs": { 00:31:22.860 "nvme_version": "1.3" 00:31:22.860 }, 00:31:22.860 "ns_data": { 00:31:22.860 "id": 1, 00:31:22.860 "can_share": true 00:31:22.860 } 00:31:22.860 } 00:31:22.860 ], 00:31:22.860 "mp_policy": "active_passive" 00:31:22.860 } 00:31:22.860 } 00:31:22.860 ] 00:31:22.860 14:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2957791 00:31:22.860 14:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:22.860 14:21:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:22.860 Running I/O for 10 seconds... 
00:31:24.249 Latency(us) 00:31:24.249 [2024-12-05T13:21:30.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:24.249 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:24.249 Nvme0n1 : 1.00 16764.00 65.48 0.00 0.00 0.00 0.00 0.00 00:31:24.249 [2024-12-05T13:21:30.549Z] =================================================================================================================== 00:31:24.249 [2024-12-05T13:21:30.549Z] Total : 16764.00 65.48 0.00 0.00 0.00 0.00 0.00 00:31:24.249 00:31:24.826 14:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fc34c7f8-d780-47cf-8781-2fac6fe2a910 00:31:25.087 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:25.087 Nvme0n1 : 2.00 17018.00 66.48 0.00 0.00 0.00 0.00 0.00 00:31:25.087 [2024-12-05T13:21:31.387Z] =================================================================================================================== 00:31:25.087 [2024-12-05T13:21:31.387Z] Total : 17018.00 66.48 0.00 0.00 0.00 0.00 0.00 00:31:25.087 00:31:25.087 true 00:31:25.087 14:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc34c7f8-d780-47cf-8781-2fac6fe2a910 00:31:25.087 14:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:25.347 14:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:25.348 14:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:25.348 14:21:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2957791 00:31:25.920 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:25.920 Nvme0n1 : 3.00 17314.33 67.63 0.00 0.00 0.00 0.00 0.00 00:31:25.920 [2024-12-05T13:21:32.220Z] =================================================================================================================== 00:31:25.920 [2024-12-05T13:21:32.220Z] Total : 17314.33 67.63 0.00 0.00 0.00 0.00 0.00 00:31:25.920 00:31:26.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:26.861 Nvme0n1 : 4.00 17526.00 68.46 0.00 0.00 0.00 0.00 0.00 00:31:26.861 [2024-12-05T13:21:33.161Z] =================================================================================================================== 00:31:26.861 [2024-12-05T13:21:33.161Z] Total : 17526.00 68.46 0.00 0.00 0.00 0.00 0.00 00:31:26.861 00:31:28.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:28.241 Nvme0n1 : 5.00 19100.80 74.61 0.00 0.00 0.00 0.00 0.00 00:31:28.241 [2024-12-05T13:21:34.541Z] =================================================================================================================== 00:31:28.241 [2024-12-05T13:21:34.541Z] Total : 19100.80 74.61 0.00 0.00 0.00 0.00 0.00 00:31:28.241 00:31:29.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:29.182 Nvme0n1 : 6.00 20171.83 78.80 0.00 0.00 0.00 0.00 0.00 00:31:29.182 [2024-12-05T13:21:35.482Z] 
=================================================================================================================== 00:31:29.182 [2024-12-05T13:21:35.482Z] Total : 20171.83 78.80 0.00 0.00 0.00 0.00 0.00 00:31:29.182 00:31:30.125 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:30.125 Nvme0n1 : 7.00 20936.86 81.78 0.00 0.00 0.00 0.00 0.00 00:31:30.125 [2024-12-05T13:21:36.425Z] =================================================================================================================== 00:31:30.125 [2024-12-05T13:21:36.425Z] Total : 20936.86 81.78 0.00 0.00 0.00 0.00 0.00 00:31:30.125 00:31:31.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:31.069 Nvme0n1 : 8.00 21510.62 84.03 0.00 0.00 0.00 0.00 0.00 00:31:31.069 [2024-12-05T13:21:37.369Z] =================================================================================================================== 00:31:31.069 [2024-12-05T13:21:37.369Z] Total : 21510.62 84.03 0.00 0.00 0.00 0.00 0.00 00:31:31.069 00:31:32.013 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:32.013 Nvme0n1 : 9.00 21956.89 85.77 0.00 0.00 0.00 0.00 0.00 00:31:32.013 [2024-12-05T13:21:38.313Z] =================================================================================================================== 00:31:32.013 [2024-12-05T13:21:38.313Z] Total : 21956.89 85.77 0.00 0.00 0.00 0.00 0.00 00:31:32.013 00:31:32.957 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:32.957 Nvme0n1 : 10.00 22313.90 87.16 0.00 0.00 0.00 0.00 0.00 00:31:32.957 [2024-12-05T13:21:39.257Z] =================================================================================================================== 00:31:32.957 [2024-12-05T13:21:39.257Z] Total : 22313.90 87.16 0.00 0.00 0.00 0.00 0.00 00:31:32.957 00:31:32.957 00:31:32.957 Latency(us) 00:31:32.957 [2024-12-05T13:21:39.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:32.957 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:32.957 Nvme0n1 : 10.00 22322.34 87.20 0.00 0.00 5731.28 4614.83 32768.00 00:31:32.957 [2024-12-05T13:21:39.257Z] =================================================================================================================== 00:31:32.957 [2024-12-05T13:21:39.257Z] Total : 22322.34 87.20 0.00 0.00 5731.28 4614.83 32768.00 00:31:32.957 { 00:31:32.957 "results": [ 00:31:32.957 { 00:31:32.957 "job": "Nvme0n1", 00:31:32.957 "core_mask": "0x2", 00:31:32.957 "workload": "randwrite", 00:31:32.957 "status": "finished", 00:31:32.957 "queue_depth": 128, 00:31:32.957 "io_size": 4096, 00:31:32.957 "runtime": 10.001954, 00:31:32.957 "iops": 22322.338215112766, 00:31:32.957 "mibps": 87.19663365278424, 00:31:32.957 "io_failed": 0, 00:31:32.957 "io_timeout": 0, 00:31:32.957 "avg_latency_us": 5731.278593373256, 00:31:32.957 "min_latency_us": 4614.826666666667, 00:31:32.957 "max_latency_us": 32768.0 00:31:32.957 } 00:31:32.957 ], 00:31:32.957 "core_count": 1 00:31:32.957 } 00:31:32.957 14:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2957459 00:31:32.957 14:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2957459 ']' 00:31:32.957 14:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2957459 00:31:32.957 
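[editor's note] The cluster counts sprinkled through this test all follow from the 4 MiB cluster size. A sketch of the grow path with the arithmetic spelled out (the truncate and rescan appear earlier in the trace; $AIO_FILE and $lvs are stand-ins for the test's backing-file path and lvstore UUID):

  # initial store: 200 MiB file / 4 MiB clusters = 50, one consumed by
  # lvstore metadata -> the 49 data clusters the test asserts
  # the 150 MiB lvol rounds up to the cluster boundary:
  # ceil(150/4) = 38 clusters = 152 MiB = 38912 blocks of 4096 B
  truncate -s 400M "$AIO_FILE"              # grow the backing file
  rpc.py bdev_aio_rescan aio_bdev           # block count 51200 -> 102400
  rpc.py bdev_lvol_grow_lvstore -u "$lvs"   # claim the new clusters
  # 400 MiB / 4 MiB = 100, minus the metadata cluster -> 99
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'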
14:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:31:32.957 14:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:32.957 14:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2957459 00:31:33.219 14:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:33.219 14:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:33.219 14:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2957459' 00:31:33.219 killing process with pid 2957459 00:31:33.219 14:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2957459 00:31:33.219 Received shutdown signal, test time was about 10.000000 seconds 00:31:33.219 00:31:33.219 Latency(us) 00:31:33.219 [2024-12-05T13:21:39.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:33.219 [2024-12-05T13:21:39.519Z] =================================================================================================================== 00:31:33.219 [2024-12-05T13:21:39.519Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:33.219 14:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2957459 00:31:33.219 14:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:33.480 14:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:33.480 14:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc34c7f8-d780-47cf-8781-2fac6fe2a910 00:31:33.480 14:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:33.742 14:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:33.742 14:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:31:33.742 14:21:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:33.742 [2024-12-05 14:21:40.035738] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:34.003 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc34c7f8-d780-47cf-8781-2fac6fe2a910 00:31:34.003 
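[editor's note] Two assertions close the clean pass: the free-cluster count and the hot-remove behaviour just traced. A sketch, assuming the same stand-in variables as above (the harness's NOT helper asserts that a command fails, here approximated with shell negation):

  # 99 data clusters - 38 allocated by the lvol = 61 free
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'   # -> 61

  rpc.py bdev_aio_delete aio_bdev    # hot-remove the base bdev; the lvstore closes
  ! rpc.py bdev_lvol_get_lvstores -u "$lvs"   # must now fail: "No such device"

  # re-creating the aio bdev re-examines the disk and brings the lvol back
  rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096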
14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:31:34.003 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc34c7f8-d780-47cf-8781-2fac6fe2a910 00:31:34.003 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:34.003 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:34.003 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:34.003 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:34.003 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:34.003 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:34.003 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:34.003 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:34.003 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc34c7f8-d780-47cf-8781-2fac6fe2a910 00:31:34.003 request: 00:31:34.003 { 00:31:34.003 "uuid": "fc34c7f8-d780-47cf-8781-2fac6fe2a910", 00:31:34.003 "method": "bdev_lvol_get_lvstores", 00:31:34.003 "req_id": 1 00:31:34.003 } 00:31:34.003 Got JSON-RPC error response 00:31:34.003 response: 00:31:34.003 { 00:31:34.003 "code": -19, 00:31:34.003 "message": "No such device" 00:31:34.003 } 00:31:34.003 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:31:34.003 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:34.003 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:34.003 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:34.003 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:34.265 aio_bdev 00:31:34.265 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
8679ac63-c2c3-4023-93b1-e28154d2087d 00:31:34.265 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=8679ac63-c2c3-4023-93b1-e28154d2087d 00:31:34.265 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:34.265 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:31:34.265 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:34.265 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:34.265 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:34.527 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8679ac63-c2c3-4023-93b1-e28154d2087d -t 2000 00:31:34.527 [ 00:31:34.527 { 00:31:34.527 "name": "8679ac63-c2c3-4023-93b1-e28154d2087d", 00:31:34.527 "aliases": [ 00:31:34.527 "lvs/lvol" 00:31:34.527 ], 00:31:34.527 "product_name": "Logical Volume", 00:31:34.527 "block_size": 4096, 00:31:34.527 "num_blocks": 38912, 00:31:34.527 "uuid": "8679ac63-c2c3-4023-93b1-e28154d2087d", 00:31:34.527 "assigned_rate_limits": { 00:31:34.527 "rw_ios_per_sec": 0, 00:31:34.527 "rw_mbytes_per_sec": 0, 00:31:34.527 "r_mbytes_per_sec": 0, 00:31:34.527 "w_mbytes_per_sec": 0 00:31:34.527 }, 00:31:34.527 "claimed": false, 00:31:34.527 "zoned": false, 00:31:34.527 "supported_io_types": { 00:31:34.527 "read": true, 00:31:34.527 "write": true, 00:31:34.527 "unmap": true, 00:31:34.527 "flush": false, 00:31:34.527 "reset": true, 00:31:34.527 "nvme_admin": false, 00:31:34.527 "nvme_io": false, 00:31:34.527 "nvme_io_md": false, 00:31:34.527 "write_zeroes": true, 00:31:34.527 "zcopy": false, 00:31:34.527 "get_zone_info": false, 00:31:34.527 "zone_management": false, 00:31:34.527 "zone_append": false, 00:31:34.527 "compare": false, 00:31:34.527 "compare_and_write": false, 00:31:34.527 "abort": false, 00:31:34.527 "seek_hole": true, 00:31:34.527 "seek_data": true, 00:31:34.527 "copy": false, 00:31:34.527 "nvme_iov_md": false 00:31:34.527 }, 00:31:34.527 "driver_specific": { 00:31:34.527 "lvol": { 00:31:34.527 "lvol_store_uuid": "fc34c7f8-d780-47cf-8781-2fac6fe2a910", 00:31:34.527 "base_bdev": "aio_bdev", 00:31:34.527 "thin_provision": false, 00:31:34.527 "num_allocated_clusters": 38, 00:31:34.527 "snapshot": false, 00:31:34.527 "clone": false, 00:31:34.527 "esnap_clone": false 00:31:34.527 } 00:31:34.527 } 00:31:34.527 } 00:31:34.527 ] 00:31:34.788 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:31:34.788 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:34.788 14:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc34c7f8-d780-47cf-8781-2fac6fe2a910 00:31:34.788 14:21:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:34.788 14:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc34c7f8-d780-47cf-8781-2fac6fe2a910 00:31:34.788 14:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:35.051 14:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:35.051 14:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8679ac63-c2c3-4023-93b1-e28154d2087d 00:31:35.313 14:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fc34c7f8-d780-47cf-8781-2fac6fe2a910 00:31:35.313 14:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:35.574 14:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:35.575 00:31:35.575 real 0m15.946s 00:31:35.575 user 0m15.590s 00:31:35.575 sys 0m1.451s 00:31:35.575 14:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:35.575 14:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:35.575 ************************************ 00:31:35.575 END TEST lvs_grow_clean 00:31:35.575 ************************************ 00:31:35.575 14:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:31:35.575 14:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:35.575 14:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:35.575 14:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:35.575 ************************************ 00:31:35.575 START TEST lvs_grow_dirty 00:31:35.575 ************************************ 00:31:35.575 14:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:31:35.575 14:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:35.575 14:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:35.575 14:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:35.575 14:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:35.575 14:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:35.575 14:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:35.575 14:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:35.575 14:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:35.575 14:21:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:35.837 14:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:35.837 14:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:36.098 14:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f10fed25-9ce7-4226-9582-df9fb28e779c 00:31:36.098 14:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f10fed25-9ce7-4226-9582-df9fb28e779c 00:31:36.098 14:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:36.098 14:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:36.098 14:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:36.098 14:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f10fed25-9ce7-4226-9582-df9fb28e779c lvol 150 00:31:36.359 14:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=78f5eab6-18c5-4d4f-98df-13327f034918 00:31:36.359 14:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:36.359 14:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:36.639 [2024-12-05 14:21:42.755666] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:36.639 [2024-12-05 14:21:42.755831] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:36.639 true 00:31:36.639 14:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f10fed25-9ce7-4226-9582-df9fb28e779c 00:31:36.639 14:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:36.901 14:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:36.901 14:21:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:36.901 14:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 78f5eab6-18c5-4d4f-98df-13327f034918 00:31:37.161 14:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:37.422 [2024-12-05 14:21:43.476238] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:37.422 14:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:37.422 14:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2960528 00:31:37.422 14:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:37.422 14:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:37.422 14:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2960528 /var/tmp/bdevperf.sock 00:31:37.422 14:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2960528 ']' 00:31:37.422 14:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:37.422 14:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:37.422 14:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:37.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:37.422 14:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:37.422 14:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:37.422 [2024-12-05 14:21:43.699696] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:31:37.422 [2024-12-05 14:21:43.699763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2960528 ] 00:31:37.683 [2024-12-05 14:21:43.786533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.683 [2024-12-05 14:21:43.820668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:37.683 14:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:37.683 14:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:37.683 14:21:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:37.943 Nvme0n1 00:31:37.943 14:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:38.204 [ 00:31:38.204 { 00:31:38.204 "name": "Nvme0n1", 00:31:38.204 "aliases": [ 00:31:38.204 "78f5eab6-18c5-4d4f-98df-13327f034918" 00:31:38.204 ], 00:31:38.204 "product_name": "NVMe disk", 00:31:38.204 "block_size": 4096, 00:31:38.204 "num_blocks": 38912, 00:31:38.204 "uuid": "78f5eab6-18c5-4d4f-98df-13327f034918", 00:31:38.204 "numa_id": 0, 00:31:38.204 "assigned_rate_limits": { 00:31:38.204 "rw_ios_per_sec": 0, 00:31:38.204 "rw_mbytes_per_sec": 0, 00:31:38.204 "r_mbytes_per_sec": 0, 00:31:38.204 "w_mbytes_per_sec": 0 00:31:38.204 }, 00:31:38.204 "claimed": false, 00:31:38.204 "zoned": false, 00:31:38.204 "supported_io_types": { 00:31:38.204 "read": true, 00:31:38.204 "write": true, 00:31:38.204 "unmap": true, 00:31:38.204 "flush": true, 00:31:38.204 "reset": true, 00:31:38.204 "nvme_admin": true, 00:31:38.204 "nvme_io": true, 00:31:38.204 "nvme_io_md": false, 00:31:38.204 "write_zeroes": true, 00:31:38.204 "zcopy": false, 00:31:38.204 "get_zone_info": false, 00:31:38.204 "zone_management": false, 00:31:38.204 "zone_append": false, 00:31:38.204 "compare": true, 00:31:38.204 "compare_and_write": true, 00:31:38.204 "abort": true, 00:31:38.204 "seek_hole": false, 00:31:38.204 "seek_data": false, 00:31:38.204 "copy": true, 00:31:38.204 "nvme_iov_md": false 00:31:38.204 }, 00:31:38.204 "memory_domains": [ 00:31:38.204 { 00:31:38.204 "dma_device_id": "system", 00:31:38.204 "dma_device_type": 1 00:31:38.204 } 00:31:38.204 ], 00:31:38.204 "driver_specific": { 00:31:38.204 "nvme": [ 00:31:38.204 { 00:31:38.204 "trid": { 00:31:38.204 "trtype": "TCP", 00:31:38.204 "adrfam": "IPv4", 00:31:38.204 "traddr": "10.0.0.2", 00:31:38.204 "trsvcid": "4420", 00:31:38.204 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:38.204 }, 00:31:38.204 "ctrlr_data": 
{ 00:31:38.204 "cntlid": 1, 00:31:38.204 "vendor_id": "0x8086", 00:31:38.204 "model_number": "SPDK bdev Controller", 00:31:38.204 "serial_number": "SPDK0", 00:31:38.204 "firmware_revision": "25.01", 00:31:38.204 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:38.204 "oacs": { 00:31:38.204 "security": 0, 00:31:38.204 "format": 0, 00:31:38.204 "firmware": 0, 00:31:38.204 "ns_manage": 0 00:31:38.204 }, 00:31:38.204 "multi_ctrlr": true, 00:31:38.204 "ana_reporting": false 00:31:38.204 }, 00:31:38.204 "vs": { 00:31:38.204 "nvme_version": "1.3" 00:31:38.204 }, 00:31:38.204 "ns_data": { 00:31:38.204 "id": 1, 00:31:38.204 "can_share": true 00:31:38.204 } 00:31:38.204 } 00:31:38.204 ], 00:31:38.204 "mp_policy": "active_passive" 00:31:38.204 } 00:31:38.204 } 00:31:38.204 ] 00:31:38.204 14:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2960651 00:31:38.204 14:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:38.205 14:21:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:38.205 Running I/O for 10 seconds... 00:31:39.145 Latency(us) 00:31:39.145 [2024-12-05T13:21:45.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:39.145 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:39.145 Nvme0n1 : 1.00 17536.00 68.50 0.00 0.00 0.00 0.00 0.00 00:31:39.145 [2024-12-05T13:21:45.445Z] =================================================================================================================== 00:31:39.145 [2024-12-05T13:21:45.445Z] Total : 17536.00 68.50 0.00 0.00 0.00 0.00 0.00 00:31:39.145 00:31:40.084 14:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f10fed25-9ce7-4226-9582-df9fb28e779c 00:31:40.344 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:40.344 Nvme0n1 : 2.00 17785.00 69.47 0.00 0.00 0.00 0.00 0.00 00:31:40.344 [2024-12-05T13:21:46.644Z] =================================================================================================================== 00:31:40.344 [2024-12-05T13:21:46.644Z] Total : 17785.00 69.47 0.00 0.00 0.00 0.00 0.00 00:31:40.344 00:31:40.344 true 00:31:40.344 14:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f10fed25-9ce7-4226-9582-df9fb28e779c 00:31:40.344 14:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:40.604 14:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:40.604 14:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:40.604 14:21:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2960651 00:31:41.190 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:41.190 Nvme0n1 : 
3.00 17868.00 69.80 0.00 0.00 0.00 0.00 0.00 00:31:41.190 [2024-12-05T13:21:47.490Z] =================================================================================================================== 00:31:41.190 [2024-12-05T13:21:47.490Z] Total : 17868.00 69.80 0.00 0.00 0.00 0.00 0.00 00:31:41.190 00:31:42.177 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:42.177 Nvme0n1 : 4.00 17934.00 70.05 0.00 0.00 0.00 0.00 0.00 00:31:42.177 [2024-12-05T13:21:48.477Z] =================================================================================================================== 00:31:42.177 [2024-12-05T13:21:48.477Z] Total : 17934.00 70.05 0.00 0.00 0.00 0.00 0.00 00:31:42.177 00:31:43.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:43.117 Nvme0n1 : 5.00 18798.00 73.43 0.00 0.00 0.00 0.00 0.00 00:31:43.117 [2024-12-05T13:21:49.417Z] =================================================================================================================== 00:31:43.117 [2024-12-05T13:21:49.417Z] Total : 18798.00 73.43 0.00 0.00 0.00 0.00 0.00 00:31:43.117 00:31:44.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:44.497 Nvme0n1 : 6.00 19898.33 77.73 0.00 0.00 0.00 0.00 0.00 00:31:44.497 [2024-12-05T13:21:50.797Z] =================================================================================================================== 00:31:44.497 [2024-12-05T13:21:50.797Z] Total : 19898.33 77.73 0.00 0.00 0.00 0.00 0.00 00:31:44.497 00:31:45.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:45.437 Nvme0n1 : 7.00 20702.43 80.87 0.00 0.00 0.00 0.00 0.00 00:31:45.437 [2024-12-05T13:21:51.737Z] =================================================================================================================== 00:31:45.437 [2024-12-05T13:21:51.737Z] Total : 20702.43 80.87 0.00 0.00 0.00 0.00 0.00 00:31:45.437 00:31:46.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:46.375 Nvme0n1 : 8.00 21313.75 83.26 0.00 0.00 0.00 0.00 0.00 00:31:46.375 [2024-12-05T13:21:52.675Z] =================================================================================================================== 00:31:46.375 [2024-12-05T13:21:52.675Z] Total : 21313.75 83.26 0.00 0.00 0.00 0.00 0.00 00:31:46.375 00:31:47.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:47.313 Nvme0n1 : 9.00 21782.11 85.09 0.00 0.00 0.00 0.00 0.00 00:31:47.313 [2024-12-05T13:21:53.613Z] =================================================================================================================== 00:31:47.313 [2024-12-05T13:21:53.613Z] Total : 21782.11 85.09 0.00 0.00 0.00 0.00 0.00 00:31:47.313 00:31:48.253 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:48.253 Nvme0n1 : 10.00 22163.00 86.57 0.00 0.00 0.00 0.00 0.00 00:31:48.253 [2024-12-05T13:21:54.553Z] =================================================================================================================== 00:31:48.253 [2024-12-05T13:21:54.553Z] Total : 22163.00 86.57 0.00 0.00 0.00 0.00 0.00 00:31:48.253 00:31:48.253 00:31:48.253 Latency(us) 00:31:48.253 [2024-12-05T13:21:54.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:48.253 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:48.253 Nvme0n1 : 10.00 22167.18 86.59 0.00 0.00 5771.55 3031.04 28180.48 00:31:48.253 
[2024-12-05T13:21:54.553Z] =================================================================================================================== 00:31:48.253 [2024-12-05T13:21:54.553Z] Total : 22167.18 86.59 0.00 0.00 5771.55 3031.04 28180.48 00:31:48.253 { 00:31:48.253 "results": [ 00:31:48.253 { 00:31:48.253 "job": "Nvme0n1", 00:31:48.253 "core_mask": "0x2", 00:31:48.253 "workload": "randwrite", 00:31:48.253 "status": "finished", 00:31:48.253 "queue_depth": 128, 00:31:48.253 "io_size": 4096, 00:31:48.253 "runtime": 10.003888, 00:31:48.253 "iops": 22167.18139987173, 00:31:48.253 "mibps": 86.59055234324894, 00:31:48.253 "io_failed": 0, 00:31:48.253 "io_timeout": 0, 00:31:48.253 "avg_latency_us": 5771.549364382195, 00:31:48.253 "min_latency_us": 3031.04, 00:31:48.253 "max_latency_us": 28180.48 00:31:48.253 } 00:31:48.253 ], 00:31:48.253 "core_count": 1 00:31:48.253 } 00:31:48.253 14:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2960528 00:31:48.253 14:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2960528 ']' 00:31:48.253 14:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2960528 00:31:48.253 14:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:31:48.253 14:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:48.253 14:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2960528 00:31:48.253 14:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:48.253 14:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:48.253 14:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2960528' 00:31:48.253 killing process with pid 2960528 00:31:48.253 14:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2960528 00:31:48.253 Received shutdown signal, test time was about 10.000000 seconds 00:31:48.253 00:31:48.253 Latency(us) 00:31:48.253 [2024-12-05T13:21:54.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:48.253 [2024-12-05T13:21:54.553Z] =================================================================================================================== 00:31:48.253 [2024-12-05T13:21:54.553Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:48.253 14:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2960528 00:31:48.513 14:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:48.513 14:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
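[editor's note] The MiB/s figure in the summary a few lines up is derivable from the JSON fields it sits next to; if that blob were saved to results.json (a hypothetical filename, the test only prints it), the cross-check is one jq expression:

  # 22167.18 IOPS * 4096 B per I/O / 2^20 = 86.59 MiB/s, matching "mibps"
  jq '.results[0] | .iops * .io_size / 1048576' results.json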
00:31:48.772 14:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f10fed25-9ce7-4226-9582-df9fb28e779c 00:31:48.772 14:21:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:49.033 14:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:49.033 14:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:31:49.033 14:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2956953 00:31:49.033 14:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2956953 00:31:49.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2956953 Killed "${NVMF_APP[@]}" "$@" 00:31:49.033 14:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:31:49.033 14:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:31:49.033 14:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:49.033 14:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:49.033 14:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:49.033 14:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2962732 00:31:49.033 14:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2962732 00:31:49.033 14:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:49.033 14:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2962732 ']' 00:31:49.033 14:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:49.033 14:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:49.033 14:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:49.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
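[editor's note] This is the one step that distinguishes the dirty variant: instead of deleting the lvstore, the harness SIGKILLs the target while the store is still open, then restarts it. A sketch using the variables from the earlier startup sketch (kill, wait, and restart flags are all taken from the trace):

  # [[ dirty == dirty ]] branch: no clean close, so the lvstore's
  # superblob stays marked in-use on disk
  kill -9 "$nvmfpid"
  wait "$nvmfpid" || true   # reaps the Killed job, as the trace's `true` does

  # restart the target; recovery happens when the aio bdev is re-created
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  nvmfpid=$!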
00:31:49.033 14:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:49.033 14:21:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:49.033 [2024-12-05 14:21:55.282657] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:49.033 [2024-12-05 14:21:55.283776] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:31:49.033 [2024-12-05 14:21:55.283829] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:49.292 [2024-12-05 14:21:55.380765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:49.292 [2024-12-05 14:21:55.414211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:49.292 [2024-12-05 14:21:55.414241] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:49.292 [2024-12-05 14:21:55.414247] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:49.292 [2024-12-05 14:21:55.414255] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:49.292 [2024-12-05 14:21:55.414259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:49.292 [2024-12-05 14:21:55.414736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:49.292 [2024-12-05 14:21:55.467769] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:49.292 [2024-12-05 14:21:55.467953] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
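The blobstore recovery notices just below fire as soon as the backing file is re-attached at script line 77. Condensed (sketch; path shortened to the repo root):

    ./scripts/rpc.py bdev_aio_create ./test/nvmf/target/aio_bdev aio_bdev 4096   # re-attach, 4096-byte blocks
    # Because the previous target died on SIGKILL with the lvstore dirty,
    # the load path falls into bs_recover and rebuilds state from on-disk
    # metadata (the "Recover: blob 0x0/0x1" notices) instead of a clean load.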
00:31:49.864 14:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:49.864 14:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:49.864 14:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:49.864 14:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:49.864 14:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:49.864 14:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:49.864 14:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:50.125 [2024-12-05 14:21:56.280901] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:50.125 [2024-12-05 14:21:56.281147] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:50.125 [2024-12-05 14:21:56.281237] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:50.125 14:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:31:50.125 14:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 78f5eab6-18c5-4d4f-98df-13327f034918 00:31:50.125 14:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=78f5eab6-18c5-4d4f-98df-13327f034918 00:31:50.125 14:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:50.125 14:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:50.125 14:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:50.125 14:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:50.125 14:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:50.386 14:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 78f5eab6-18c5-4d4f-98df-13327f034918 -t 2000 00:31:50.386 [ 00:31:50.386 { 00:31:50.386 "name": "78f5eab6-18c5-4d4f-98df-13327f034918", 00:31:50.386 "aliases": [ 00:31:50.386 "lvs/lvol" 00:31:50.386 ], 00:31:50.386 "product_name": "Logical Volume", 00:31:50.386 "block_size": 4096, 00:31:50.386 "num_blocks": 38912, 00:31:50.386 "uuid": "78f5eab6-18c5-4d4f-98df-13327f034918", 00:31:50.386 "assigned_rate_limits": { 00:31:50.386 "rw_ios_per_sec": 0, 00:31:50.386 "rw_mbytes_per_sec": 0, 00:31:50.386 
"r_mbytes_per_sec": 0, 00:31:50.386 "w_mbytes_per_sec": 0 00:31:50.386 }, 00:31:50.386 "claimed": false, 00:31:50.386 "zoned": false, 00:31:50.386 "supported_io_types": { 00:31:50.386 "read": true, 00:31:50.386 "write": true, 00:31:50.386 "unmap": true, 00:31:50.386 "flush": false, 00:31:50.386 "reset": true, 00:31:50.386 "nvme_admin": false, 00:31:50.386 "nvme_io": false, 00:31:50.386 "nvme_io_md": false, 00:31:50.386 "write_zeroes": true, 00:31:50.386 "zcopy": false, 00:31:50.386 "get_zone_info": false, 00:31:50.386 "zone_management": false, 00:31:50.386 "zone_append": false, 00:31:50.386 "compare": false, 00:31:50.386 "compare_and_write": false, 00:31:50.386 "abort": false, 00:31:50.386 "seek_hole": true, 00:31:50.386 "seek_data": true, 00:31:50.386 "copy": false, 00:31:50.386 "nvme_iov_md": false 00:31:50.386 }, 00:31:50.386 "driver_specific": { 00:31:50.386 "lvol": { 00:31:50.386 "lvol_store_uuid": "f10fed25-9ce7-4226-9582-df9fb28e779c", 00:31:50.386 "base_bdev": "aio_bdev", 00:31:50.386 "thin_provision": false, 00:31:50.386 "num_allocated_clusters": 38, 00:31:50.386 "snapshot": false, 00:31:50.386 "clone": false, 00:31:50.386 "esnap_clone": false 00:31:50.386 } 00:31:50.386 } 00:31:50.386 } 00:31:50.386 ] 00:31:50.386 14:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:50.386 14:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f10fed25-9ce7-4226-9582-df9fb28e779c 00:31:50.386 14:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:31:50.646 14:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:31:50.646 14:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:31:50.646 14:21:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f10fed25-9ce7-4226-9582-df9fb28e779c 00:31:50.907 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:31:50.907 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:50.907 [2024-12-05 14:21:57.151218] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:51.168 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f10fed25-9ce7-4226-9582-df9fb28e779c 00:31:51.168 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:31:51.168 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f10fed25-9ce7-4226-9582-df9fb28e779c 00:31:51.168 14:21:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:51.168 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:51.168 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:51.168 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:51.168 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:51.168 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:51.168 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:51.168 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:51.168 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f10fed25-9ce7-4226-9582-df9fb28e779c 00:31:51.168 request: 00:31:51.168 { 00:31:51.168 "uuid": "f10fed25-9ce7-4226-9582-df9fb28e779c", 00:31:51.168 "method": "bdev_lvol_get_lvstores", 00:31:51.168 "req_id": 1 00:31:51.168 } 00:31:51.168 Got JSON-RPC error response 00:31:51.168 response: 00:31:51.168 { 00:31:51.168 "code": -19, 00:31:51.168 "message": "No such device" 00:31:51.168 } 00:31:51.168 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:31:51.168 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:51.168 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:51.168 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:51.168 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:51.429 aio_bdev 00:31:51.429 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 78f5eab6-18c5-4d4f-98df-13327f034918 00:31:51.429 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=78f5eab6-18c5-4d4f-98df-13327f034918 00:31:51.429 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:51.429 14:21:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:51.429 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:51.429 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:51.429 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:51.688 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 78f5eab6-18c5-4d4f-98df-13327f034918 -t 2000 00:31:51.688 [ 00:31:51.688 { 00:31:51.688 "name": "78f5eab6-18c5-4d4f-98df-13327f034918", 00:31:51.688 "aliases": [ 00:31:51.688 "lvs/lvol" 00:31:51.688 ], 00:31:51.688 "product_name": "Logical Volume", 00:31:51.688 "block_size": 4096, 00:31:51.688 "num_blocks": 38912, 00:31:51.688 "uuid": "78f5eab6-18c5-4d4f-98df-13327f034918", 00:31:51.688 "assigned_rate_limits": { 00:31:51.688 "rw_ios_per_sec": 0, 00:31:51.688 "rw_mbytes_per_sec": 0, 00:31:51.688 "r_mbytes_per_sec": 0, 00:31:51.688 "w_mbytes_per_sec": 0 00:31:51.688 }, 00:31:51.688 "claimed": false, 00:31:51.688 "zoned": false, 00:31:51.688 "supported_io_types": { 00:31:51.688 "read": true, 00:31:51.688 "write": true, 00:31:51.688 "unmap": true, 00:31:51.688 "flush": false, 00:31:51.688 "reset": true, 00:31:51.688 "nvme_admin": false, 00:31:51.688 "nvme_io": false, 00:31:51.688 "nvme_io_md": false, 00:31:51.688 "write_zeroes": true, 00:31:51.688 "zcopy": false, 00:31:51.688 "get_zone_info": false, 00:31:51.688 "zone_management": false, 00:31:51.688 "zone_append": false, 00:31:51.688 "compare": false, 00:31:51.689 "compare_and_write": false, 00:31:51.689 "abort": false, 00:31:51.689 "seek_hole": true, 00:31:51.689 "seek_data": true, 00:31:51.689 "copy": false, 00:31:51.689 "nvme_iov_md": false 00:31:51.689 }, 00:31:51.689 "driver_specific": { 00:31:51.689 "lvol": { 00:31:51.689 "lvol_store_uuid": "f10fed25-9ce7-4226-9582-df9fb28e779c", 00:31:51.689 "base_bdev": "aio_bdev", 00:31:51.689 "thin_provision": false, 00:31:51.689 "num_allocated_clusters": 38, 00:31:51.689 "snapshot": false, 00:31:51.689 "clone": false, 00:31:51.689 "esnap_clone": false 00:31:51.689 } 00:31:51.689 } 00:31:51.689 } 00:31:51.689 ] 00:31:51.689 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:51.689 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f10fed25-9ce7-4226-9582-df9fb28e779c 00:31:51.689 14:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:51.950 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:51.950 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f10fed25-9ce7-4226-9582-df9fb28e779c 00:31:51.950 14:21:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:52.210 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:52.210 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 78f5eab6-18c5-4d4f-98df-13327f034918 00:31:52.210 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f10fed25-9ce7-4226-9582-df9fb28e779c 00:31:52.471 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:52.732 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:52.732 00:31:52.732 real 0m17.038s 00:31:52.732 user 0m33.154s 00:31:52.732 sys 0m4.708s 00:31:52.732 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:52.732 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:52.732 ************************************ 00:31:52.732 END TEST lvs_grow_dirty 00:31:52.732 ************************************ 00:31:52.732 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:31:52.732 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:31:52.732 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:31:52.732 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:31:52.732 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:31:52.732 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:31:52.732 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:31:52.732 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:31:52.732 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:31:52.732 nvmf_trace.0 00:31:52.733 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:31:52.733 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:31:52.733 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:52.733 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
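For reference, the explicit teardown traced above runs top-down: lvol, then lvstore, then the aio bdev and its backing file, rather than leaning on the hot-remove path already exercised at script line 84. Condensed sketch (paths shortened):

    ./scripts/rpc.py bdev_lvol_delete 78f5eab6-18c5-4d4f-98df-13327f034918
    ./scripts/rpc.py bdev_lvol_delete_lvstore -u f10fed25-9ce7-4226-9582-df9fb28e779c
    ./scripts/rpc.py bdev_aio_delete aio_bdev
    rm -f ./test/nvmf/target/aio_bdev   # backing file created for the test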
00:31:52.733 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:52.733 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:31:52.733 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:52.733 14:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:52.733 rmmod nvme_tcp 00:31:52.733 rmmod nvme_fabrics 00:31:52.733 rmmod nvme_keyring 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2962732 ']' 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2962732 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2962732 ']' 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2962732 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2962732 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2962732' 00:31:52.995 killing process with pid 2962732 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2962732 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2962732 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:52.995 14:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:55.548 00:31:55.548 real 0m44.448s 00:31:55.548 user 0m51.752s 00:31:55.548 sys 0m12.358s 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:55.548 ************************************ 00:31:55.548 END TEST nvmf_lvs_grow 00:31:55.548 ************************************ 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:55.548 ************************************ 00:31:55.548 START TEST nvmf_bdev_io_wait 00:31:55.548 ************************************ 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:55.548 * Looking for test storage... 
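The xtrace that follows steps through the version comparison in scripts/common.sh (lt 1.15 2, i.e. cmp_versions 1.15 '<' 2) to decide which lcov options apply. The traced logic condenses to the sketch below (hypothetical helper name; the real script also handles the other comparison operators):

    version_lt() {
        local IFS=.-:                  # split on dots, dashes, colons, as the trace shows
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) v
        for (( v = 0; v < n; v++ )); do
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1   # first differing component decides
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
        done
        return 1                       # equal is not "less than"
    }
    version_lt 1.15 2 && echo "1.15 < 2"   # matches the trace: ver1[0]=1 < ver2[0]=2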
00:31:55.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:55.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.548 --rc genhtml_branch_coverage=1 00:31:55.548 --rc genhtml_function_coverage=1 00:31:55.548 --rc genhtml_legend=1 00:31:55.548 --rc geninfo_all_blocks=1 00:31:55.548 --rc geninfo_unexecuted_blocks=1 00:31:55.548 00:31:55.548 ' 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:55.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.548 --rc genhtml_branch_coverage=1 00:31:55.548 --rc genhtml_function_coverage=1 00:31:55.548 --rc genhtml_legend=1 00:31:55.548 --rc geninfo_all_blocks=1 00:31:55.548 --rc geninfo_unexecuted_blocks=1 00:31:55.548 00:31:55.548 ' 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:55.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.548 --rc genhtml_branch_coverage=1 00:31:55.548 --rc genhtml_function_coverage=1 00:31:55.548 --rc genhtml_legend=1 00:31:55.548 --rc geninfo_all_blocks=1 00:31:55.548 --rc geninfo_unexecuted_blocks=1 00:31:55.548 00:31:55.548 ' 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:55.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.548 --rc genhtml_branch_coverage=1 00:31:55.548 --rc genhtml_function_coverage=1 00:31:55.548 --rc genhtml_legend=1 00:31:55.548 --rc geninfo_all_blocks=1 00:31:55.548 --rc 
geninfo_unexecuted_blocks=1 00:31:55.548 00:31:55.548 ' 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:55.548 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:31:55.549 14:22:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:03.696 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:03.696 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.696 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:03.696 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:03.696 
14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:03.697 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:03.697 14:22:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:03.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:03.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:32:03.697 00:32:03.697 --- 10.0.0.2 ping statistics --- 00:32:03.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.697 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:03.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:03.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:32:03.697 00:32:03.697 --- 10.0.0.1 ping statistics --- 00:32:03.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.697 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2967615 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2967615 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2967615 ']' 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:03.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
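The nvmf_tcp_init sequence above splits the two e810 ports into a target/initiator pair: cvl_0_0 (10.0.0.2) moves into the cvl_0_0_ns_spdk namespace for the target, while cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator. Condensed replay of the traced commands (sketch; iptables comment arguments dropped):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                   # cross-namespace reachability check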
00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:03.697 [2024-12-05 14:22:09.147544] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:03.697 [2024-12-05 14:22:09.148696] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:32:03.697 [2024-12-05 14:22:09.148745] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:03.697 [2024-12-05 14:22:09.247420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:03.697 [2024-12-05 14:22:09.301562] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:03.697 [2024-12-05 14:22:09.301615] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:03.697 [2024-12-05 14:22:09.301624] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:03.697 [2024-12-05 14:22:09.301632] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:03.697 [2024-12-05 14:22:09.301638] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:03.697 [2024-12-05 14:22:09.303559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:03.697 [2024-12-05 14:22:09.303724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:03.697 [2024-12-05 14:22:09.303884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:03.697 [2024-12-05 14:22:09.303885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:03.697 [2024-12-05 14:22:09.304368] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
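
The notices above are the target coming up: nvmfappstart launched nvmf_tgt inside the namespace with --interrupt-mode, core mask 0xF (four reactors, cores 0 through 3 here) and --wait-for-rpc, so after EAL and reactor initialization the app idles until framework_start_init arrives over /var/tmp/spdk.sock. Stripped of the harness wrappers, the launch amounts to roughly this (paths from this workspace; waitforlisten is the harness helper seen in the trace):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &   # -e 0xFFFF: tracepoint group mask
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # poll until /var/tmp/spdk.sock accepts RPCs
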
00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:03.697 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:03.960 14:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:03.960 [2024-12-05 14:22:10.079423] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:03.960 [2024-12-05 14:22:10.080536] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:03.960 [2024-12-05 14:22:10.080543] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:03.960 [2024-12-05 14:22:10.080551] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
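
With the app parked by --wait-for-rpc, the test front-loads its bdev options before releasing the framework: bdev_set_options -p 5 -c 1 shrinks the bdev_io pool to five entries so the runs below can exhaust it and exercise the io_wait retry path this test is named for, and framework_start_init then flips the poll-group threads to interrupt mode. Together with the transport and subsystem provisioning that follows just below, the RPC sequence condenses to roughly this (rpc.py is SPDK's stock RPC client; the rpc_cmd wrapper above routes through the same socket):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_set_options -p 5 -c 1      # tiny bdev_io pool: force allocation failures into io_wait
    $rpc framework_start_init            # release the app parked by --wait-for-rpc
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
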
00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:03.960 [2024-12-05 14:22:10.092596] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:03.960 Malloc0 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:03.960 [2024-12-05 14:22:10.169012] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2967955 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2967958 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:03.960 { 00:32:03.960 "params": { 00:32:03.960 "name": "Nvme$subsystem", 00:32:03.960 "trtype": "$TEST_TRANSPORT", 00:32:03.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:03.960 "adrfam": "ipv4", 00:32:03.960 "trsvcid": "$NVMF_PORT", 00:32:03.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:03.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:03.960 "hdgst": ${hdgst:-false}, 00:32:03.960 "ddgst": ${ddgst:-false} 00:32:03.960 }, 00:32:03.960 "method": "bdev_nvme_attach_controller" 00:32:03.960 } 00:32:03.960 EOF 00:32:03.960 )") 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2967961 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:03.960 { 00:32:03.960 "params": { 00:32:03.960 "name": "Nvme$subsystem", 00:32:03.960 "trtype": "$TEST_TRANSPORT", 00:32:03.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:03.960 "adrfam": "ipv4", 00:32:03.960 "trsvcid": "$NVMF_PORT", 00:32:03.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:03.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:03.960 "hdgst": ${hdgst:-false}, 00:32:03.960 "ddgst": ${ddgst:-false} 00:32:03.960 }, 00:32:03.960 "method": "bdev_nvme_attach_controller" 00:32:03.960 } 00:32:03.960 EOF 00:32:03.960 )") 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2967964 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:03.960 { 00:32:03.960 "params": { 00:32:03.960 "name": "Nvme$subsystem", 00:32:03.960 "trtype": "$TEST_TRANSPORT", 00:32:03.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:03.960 "adrfam": "ipv4", 00:32:03.960 "trsvcid": "$NVMF_PORT", 00:32:03.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:03.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:03.960 "hdgst": ${hdgst:-false}, 00:32:03.960 "ddgst": ${ddgst:-false} 00:32:03.960 }, 00:32:03.960 "method": "bdev_nvme_attach_controller" 00:32:03.960 } 00:32:03.960 EOF 00:32:03.960 )") 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:03.960 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:03.960 { 00:32:03.960 "params": { 00:32:03.960 "name": "Nvme$subsystem", 00:32:03.960 "trtype": "$TEST_TRANSPORT", 00:32:03.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:03.960 "adrfam": "ipv4", 00:32:03.960 "trsvcid": "$NVMF_PORT", 00:32:03.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:03.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:03.961 "hdgst": ${hdgst:-false}, 00:32:03.961 "ddgst": ${ddgst:-false} 00:32:03.961 }, 00:32:03.961 "method": "bdev_nvme_attach_controller" 00:32:03.961 } 00:32:03.961 EOF 00:32:03.961 )") 00:32:03.961 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:03.961 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2967955 00:32:03.961 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:03.961 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:03.961 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:32:03.961 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:03.961 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:03.961 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:03.961 "params": { 00:32:03.961 "name": "Nvme1", 00:32:03.961 "trtype": "tcp", 00:32:03.961 "traddr": "10.0.0.2", 00:32:03.961 "adrfam": "ipv4", 00:32:03.961 "trsvcid": "4420", 00:32:03.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:03.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:03.961 "hdgst": false, 00:32:03.961 "ddgst": false 00:32:03.961 }, 00:32:03.961 "method": "bdev_nvme_attach_controller" 00:32:03.961 }' 00:32:03.961 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:03.961 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:03.961 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:03.961 "params": { 00:32:03.961 "name": "Nvme1", 00:32:03.961 "trtype": "tcp", 00:32:03.961 "traddr": "10.0.0.2", 00:32:03.961 "adrfam": "ipv4", 00:32:03.961 "trsvcid": "4420", 00:32:03.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:03.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:03.961 "hdgst": false, 00:32:03.961 "ddgst": false 00:32:03.961 }, 00:32:03.961 "method": "bdev_nvme_attach_controller" 00:32:03.961 }' 00:32:03.961 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:03.961 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:03.961 "params": { 00:32:03.961 "name": "Nvme1", 00:32:03.961 "trtype": "tcp", 00:32:03.961 "traddr": "10.0.0.2", 00:32:03.961 "adrfam": "ipv4", 00:32:03.961 "trsvcid": "4420", 00:32:03.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:03.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:03.961 "hdgst": false, 00:32:03.961 "ddgst": false 00:32:03.961 }, 00:32:03.961 "method": "bdev_nvme_attach_controller" 00:32:03.961 }' 00:32:03.961 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:03.961 14:22:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:03.961 "params": { 00:32:03.961 "name": "Nvme1", 00:32:03.961 "trtype": "tcp", 00:32:03.961 "traddr": "10.0.0.2", 00:32:03.961 "adrfam": "ipv4", 00:32:03.961 "trsvcid": "4420", 00:32:03.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:03.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:03.961 "hdgst": false, 00:32:03.961 "ddgst": false 00:32:03.961 }, 00:32:03.961 "method": "bdev_nvme_attach_controller" 00:32:03.961 }' 00:32:03.961 [2024-12-05 14:22:10.228048] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:32:03.961 [2024-12-05 14:22:10.228126] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:03.961 [2024-12-05 14:22:10.230032] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:32:03.961 [2024-12-05 14:22:10.230100] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:03.961 [2024-12-05 14:22:10.231259] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:32:03.961 [2024-12-05 14:22:10.231324] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:03.961 [2024-12-05 14:22:10.233165] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:32:03.961 [2024-12-05 14:22:10.233235] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:04.222 [2024-12-05 14:22:10.454897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.222 [2024-12-05 14:22:10.495239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:04.482 [2024-12-05 14:22:10.545134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.482 [2024-12-05 14:22:10.586903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:04.482 [2024-12-05 14:22:10.614437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.482 [2024-12-05 14:22:10.651418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:04.482 [2024-12-05 14:22:10.682271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.482 [2024-12-05 14:22:10.719716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:04.482 Running I/O for 1 seconds... 00:32:04.743 Running I/O for 1 seconds... 00:32:04.743 Running I/O for 1 seconds... 00:32:04.743 Running I/O for 1 seconds... 
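
Each of the four bdevperf jobs above (write/read/flush/unmap on core masks 0x10/0x20/0x40/0x80) receives its bdev configuration through process substitution, --json /dev/fd/63. gen_nvmf_target_json expands the heredoc template printed above into a config whose single entry attaches an NVMe-oF controller to the listener just created. After expansion, and modulo the exact wrapper keys the helper emits around the config array (an assumption here), the document fed to the write job is approximately:

    cfg='{ "subsystems": [ { "subsystem": "bdev", "config": [ {
            "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode1",
                        "hostnqn": "nqn.2016-06.io.spdk:host1",
                        "hdgst": false, "ddgst": false } } ] } ] }'
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 --json <(echo "$cfg")

The four result tables that follow report per-workload throughput and latency; flush moves no data and the target backs the namespace with a malloc bdev, which is presumably why its IOPS dwarf the data-moving workloads.
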
00:32:05.684 10852.00 IOPS, 42.39 MiB/s
00:32:05.684 Latency(us)
00:32:05.684 [2024-12-05T13:22:11.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:05.684 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:32:05.684 Nvme1n1 : 1.01 10906.97 42.61 0.00 0.00 11691.48 2389.33 13871.79
00:32:05.684 [2024-12-05T13:22:11.984Z] ===================================================================================================================
00:32:05.684 [2024-12-05T13:22:11.984Z] Total : 10906.97 42.61 0.00 0.00 11691.48 2389.33 13871.79
00:32:05.684 180320.00 IOPS, 704.38 MiB/s
00:32:05.684 Latency(us)
00:32:05.684 [2024-12-05T13:22:11.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:05.684 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:32:05.684 Nvme1n1 : 1.00 179953.70 702.94 0.00 0.00 706.94 300.37 2034.35
00:32:05.684 [2024-12-05T13:22:11.984Z] ===================================================================================================================
00:32:05.684 [2024-12-05T13:22:11.984Z] Total : 179953.70 702.94 0.00 0.00 706.94 300.37 2034.35
00:32:05.684 11222.00 IOPS, 43.84 MiB/s
00:32:05.684 Latency(us)
00:32:05.684 [2024-12-05T13:22:11.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:05.684 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:32:05.684 Nvme1n1 : 1.01 11292.06 44.11 0.00 0.00 11295.67 2771.63 15291.73
00:32:05.684 [2024-12-05T13:22:11.984Z] ===================================================================================================================
00:32:05.684 [2024-12-05T13:22:11.984Z] Total : 11292.06 44.11 0.00 0.00 11295.67 2771.63 15291.73
00:32:05.684 9937.00 IOPS, 38.82 MiB/s
00:32:05.684 Latency(us)
00:32:05.684 [2024-12-05T13:22:11.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:05.684 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:32:05.684 Nvme1n1 : 1.01 9997.99 39.05 0.00 0.00 12759.59 4860.59 19442.35
00:32:05.684 [2024-12-05T13:22:11.984Z] ===================================================================================================================
00:32:05.684 [2024-12-05T13:22:11.984Z] Total : 9997.99 39.05 0.00 0.00 12759.59 4860.59 19442.35
00:32:05.685 14:22:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2967958 00:32:05.945 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2967961 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2967964 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:05.945 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:05.945 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:05.945 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:05.945 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:05.945 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:05.945 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:05.945 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:05.945 rmmod nvme_tcp 00:32:05.945 rmmod nvme_fabrics 00:32:05.945 rmmod nvme_keyring 00:32:05.945 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:05.945 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:05.945 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:05.945 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2967615 ']' 00:32:05.945 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2967615 00:32:05.945 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2967615 ']' 00:32:05.945 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2967615 00:32:05.945 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:32:05.945 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:05.945 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2967615 00:32:05.945 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:05.945 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:05.946 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2967615' 00:32:05.946 killing process with pid 2967615 00:32:05.946 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2967615 00:32:05.946 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2967615 00:32:06.206 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:06.206 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:06.206 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:06.206 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:06.206 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:32:06.206 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:06.206 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:32:06.206 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:06.206 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:06.206 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:06.206 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:06.206 14:22:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:08.119 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:08.119 00:32:08.119 real 0m12.988s 00:32:08.119 user 0m15.447s 00:32:08.119 sys 0m7.749s 00:32:08.119 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:08.119 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:08.119 ************************************ 00:32:08.119 END TEST nvmf_bdev_io_wait 00:32:08.119 ************************************ 00:32:08.380 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:08.380 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:08.380 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:08.380 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:08.380 ************************************ 00:32:08.380 START TEST nvmf_queue_depth 00:32:08.380 ************************************ 00:32:08.380 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:08.380 * Looking for test storage... 
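
Before nvmf_queue_depth's prologue continues below, note the teardown logged a few lines above at the end of nvmf_bdev_io_wait: it is the standard nvmftestfini sequence of kill the target by pid, unload the host-side modules, restore iptables minus the SPDK_NVMF-tagged rule, drop the namespace, and flush the initiator address, so the next test starts clean. In shell terms, roughly (killprocess and remove_spdk_ns are harness helpers; the netns delete is an assumption about what remove_spdk_ns boils down to here):

    killprocess "$nvmfpid"          # SIGTERM the nvmf_tgt reactor, then wait on it
    modprobe -v -r nvme-tcp         # rmmod nvme_tcp/nvme_fabrics/nvme_keyring, per the log
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the tagged rules
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1
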
00:32:08.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:08.380 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:08.380 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:32:08.380 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:08.380 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:08.380 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:08.380 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:08.380 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:08.380 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:08.642 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:08.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:08.643 --rc genhtml_branch_coverage=1 00:32:08.643 --rc genhtml_function_coverage=1 00:32:08.643 --rc genhtml_legend=1 00:32:08.643 --rc geninfo_all_blocks=1 00:32:08.643 --rc geninfo_unexecuted_blocks=1 00:32:08.643 00:32:08.643 ' 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:08.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:08.643 --rc genhtml_branch_coverage=1 00:32:08.643 --rc genhtml_function_coverage=1 00:32:08.643 --rc genhtml_legend=1 00:32:08.643 --rc geninfo_all_blocks=1 00:32:08.643 --rc geninfo_unexecuted_blocks=1 00:32:08.643 00:32:08.643 ' 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:08.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:08.643 --rc genhtml_branch_coverage=1 00:32:08.643 --rc genhtml_function_coverage=1 00:32:08.643 --rc genhtml_legend=1 00:32:08.643 --rc geninfo_all_blocks=1 00:32:08.643 --rc geninfo_unexecuted_blocks=1 00:32:08.643 00:32:08.643 ' 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:08.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:08.643 --rc genhtml_branch_coverage=1 00:32:08.643 --rc genhtml_function_coverage=1 00:32:08.643 --rc genhtml_legend=1 00:32:08.643 --rc geninfo_all_blocks=1 00:32:08.643 --rc 
geninfo_unexecuted_blocks=1 00:32:08.643 00:32:08.643 ' 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:08.643 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:08.644 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:08.644 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:08.644 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:08.644 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:08.644 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:08.644 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:08.644 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:08.644 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:08.644 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:08.644 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:08.644 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:08.644 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:32:08.644 14:22:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
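
From here nvmf/common.sh re-runs NIC discovery for the new test: the e810/x722/mlx arrays populated below map known Intel and Mellanox PCI IDs to device families, the two ice-driven 0x8086:0x159b functions (0000:4b:00.0 and 0000:4b:00.1) are selected, and each one's kernel netdev name is read out of sysfs. The sysfs step condenses to:

    pci=0000:4b:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path: cvl_0_0
    net_devs+=("${pci_net_devs[@]}")                   # later split into target/initiator sides
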
00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:16.780 14:22:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:16.780 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:16.780 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:16.780 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:32:16.781 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:16.781 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:16.781 14:22:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:16.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:16.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:32:16.781 00:32:16.781 --- 10.0.0.2 ping statistics --- 00:32:16.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:16.781 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:16.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:16.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:32:16.781 00:32:16.781 --- 10.0.0.1 ping statistics --- 00:32:16.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:16.781 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2972340 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2972340 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2972340 ']' 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:16.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
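A note for readers following the trace: everything from nvmf_tcp_init through the two pings above is nvmf/common.sh splitting the dual-port e810 NIC so one machine can act as both target and initiator, after which nvmfappstart launches nvmf_tgt inside the fresh namespace. Condensed, with the interface names, addresses, and flags taken verbatim from this run:

  ip netns add cvl_0_0_ns_spdk                       # target lives in its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # first port -> namespace (target side)
  ip addr add 10.0.0.1/24 dev cvl_0_1                # second port stays in the host (initiator)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port; the SPDK_NVMF comment lets teardown strip the rule later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # host -> namespace reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> host reachability
  # the target itself then runs inside the namespace:
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &

Keeping the target behind a namespace forces all traffic across the real NIC pair rather than loopback, which is the point of a phy (physical) autotest.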
00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:16.781 14:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:16.781 [2024-12-05 14:22:22.312129] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:16.781 [2024-12-05 14:22:22.313261] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:32:16.781 [2024-12-05 14:22:22.313311] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:16.781 [2024-12-05 14:22:22.416191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.781 [2024-12-05 14:22:22.466923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:16.781 [2024-12-05 14:22:22.466972] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:16.781 [2024-12-05 14:22:22.466981] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:16.781 [2024-12-05 14:22:22.466988] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:16.781 [2024-12-05 14:22:22.466995] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:16.781 [2024-12-05 14:22:22.467794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:16.781 [2024-12-05 14:22:22.545491] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:16.781 [2024-12-05 14:22:22.545760] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
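waitforlisten (invoked at nvmf/common.sh@510 above) blocks until the freshly started app answers on its UNIX-domain RPC socket, and each rpc_cmd after it is a thin wrapper over scripts/rpc.py. A minimal sketch of the idea, assuming polling via the spdk_get_version RPC; the real helpers in autotest_common.sh add retries, timeouts, and the exit-status assertions that show up as [[ 0 == 0 ]] checks in the trace:

  rpc_py=./scripts/rpc.py
  waitforlisten() {                  # waitforlisten PID [SOCKET]
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1              # app died during startup
          "$rpc_py" -s "$sock" spdk_get_version &>/dev/null && return 0
          sleep 0.1
      done
      return 1
  }
  rpc_cmd() { "$rpc_py" "$@"; }      # the real wrapper also records and checks $?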
00:32:17.042 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:17.042 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:32:17.042 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:17.042 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:17.042 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:17.042 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:17.042 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:17.042 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.042 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:17.042 [2024-12-05 14:22:23.180657] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:17.042 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.042 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:17.042 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.042 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:17.043 Malloc0 00:32:17.043 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.043 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:17.043 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.043 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:17.043 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.043 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:17.043 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.043 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:17.043 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.043 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:17.043 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
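Stripped of the xtrace noise, the provisioning that target/queue_depth.sh@23-27 just performed is five RPC calls against the target's default socket (paths relative to the SPDK checkout):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport, 8 KiB I/O unit size
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM-backed bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The -a on nvmf_create_subsystem allows any host NQN to connect, which is why the initiator below needs no allow-listing step.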
00:32:17.043 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:17.043 [2024-12-05 14:22:23.272844] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:17.043 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.043 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2972683 00:32:17.043 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:17.043 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:17.043 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2972683 /var/tmp/bdevperf.sock 00:32:17.043 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2972683 ']' 00:32:17.043 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:17.043 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:17.043 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:17.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:17.043 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:17.043 14:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:17.043 [2024-12-05 14:22:23.331603] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
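The measurement itself, as the next lines show, uses the bdevperf example app: started idle (-z) on its own RPC socket, handed an NVMe-oF controller over the fabric, then told to run. The -q 1024 is the queue depth this test exists to exercise:

  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
      -q 1024 -o 4096 -w verify -t 10 &          # QD 1024, 4 KiB I/Os, 10 s verify workload
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The IOPS ramp that follows (8199 up to about 12086) is bdevperf's per-second progress for the single NVMe0n1 job, and the closing JSON is its machine-readable summary of the same run.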
00:32:17.043 [2024-12-05 14:22:23.331666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2972683 ] 00:32:17.303 [2024-12-05 14:22:23.423341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.303 [2024-12-05 14:22:23.476424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.907 14:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:17.907 14:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:32:17.907 14:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:17.907 14:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.907 14:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:18.168 NVMe0n1 00:32:18.168 14:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.168 14:22:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:18.430 Running I/O for 10 seconds... 00:32:20.316 8199.00 IOPS, 32.03 MiB/s [2024-12-05T13:22:27.558Z] 8663.00 IOPS, 33.84 MiB/s [2024-12-05T13:22:28.503Z] 9134.33 IOPS, 35.68 MiB/s [2024-12-05T13:22:29.888Z] 10022.50 IOPS, 39.15 MiB/s [2024-12-05T13:22:30.829Z] 10683.80 IOPS, 41.73 MiB/s [2024-12-05T13:22:31.870Z] 11128.00 IOPS, 43.47 MiB/s [2024-12-05T13:22:32.811Z] 11492.86 IOPS, 44.89 MiB/s [2024-12-05T13:22:33.753Z] 11732.75 IOPS, 45.83 MiB/s [2024-12-05T13:22:34.697Z] 11932.78 IOPS, 46.61 MiB/s [2024-12-05T13:22:34.697Z] 12086.40 IOPS, 47.21 MiB/s 00:32:28.397 Latency(us) 00:32:28.397 [2024-12-05T13:22:34.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:28.397 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:32:28.398 Verification LBA range: start 0x0 length 0x4000 00:32:28.398 NVMe0n1 : 10.05 12130.71 47.39 0.00 0.00 84133.81 10813.44 75147.95 00:32:28.398 [2024-12-05T13:22:34.698Z] =================================================================================================================== 00:32:28.398 [2024-12-05T13:22:34.698Z] Total : 12130.71 47.39 0.00 0.00 84133.81 10813.44 75147.95 00:32:28.398 { 00:32:28.398 "results": [ 00:32:28.398 { 00:32:28.398 "job": "NVMe0n1", 00:32:28.398 "core_mask": "0x1", 00:32:28.398 "workload": "verify", 00:32:28.398 "status": "finished", 00:32:28.398 "verify_range": { 00:32:28.398 "start": 0, 00:32:28.398 "length": 16384 00:32:28.398 }, 00:32:28.398 "queue_depth": 1024, 00:32:28.398 "io_size": 4096, 00:32:28.398 "runtime": 10.047313, 00:32:28.398 "iops": 12130.705990745984, 00:32:28.398 "mibps": 47.3855702763515, 00:32:28.398 "io_failed": 0, 00:32:28.398 "io_timeout": 0, 00:32:28.398 "avg_latency_us": 84133.8109121739, 00:32:28.398 "min_latency_us": 10813.44, 00:32:28.398 "max_latency_us": 75147.94666666667 00:32:28.398 } 00:32:28.398 ], 
00:32:28.398 "core_count": 1 00:32:28.398 } 00:32:28.398 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2972683 00:32:28.398 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2972683 ']' 00:32:28.398 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2972683 00:32:28.398 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:32:28.398 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:28.398 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2972683 00:32:28.398 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:28.398 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:28.398 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2972683' 00:32:28.398 killing process with pid 2972683 00:32:28.398 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2972683 00:32:28.398 Received shutdown signal, test time was about 10.000000 seconds 00:32:28.398 00:32:28.398 Latency(us) 00:32:28.398 [2024-12-05T13:22:34.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:28.398 [2024-12-05T13:22:34.698Z] =================================================================================================================== 00:32:28.398 [2024-12-05T13:22:34.698Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:28.398 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2972683 00:32:28.658 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:32:28.658 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:32:28.658 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:28.658 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:32:28.658 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:28.658 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:32:28.658 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:28.658 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:28.658 rmmod nvme_tcp 00:32:28.658 rmmod nvme_fabrics 00:32:28.658 rmmod nvme_keyring 00:32:28.658 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:28.658 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:32:28.658 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:32:28.658 14:22:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2972340 ']' 00:32:28.658 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2972340 00:32:28.658 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2972340 ']' 00:32:28.658 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2972340 00:32:28.658 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:32:28.658 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:28.658 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2972340 00:32:28.658 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:28.658 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:28.658 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2972340' 00:32:28.658 killing process with pid 2972340 00:32:28.658 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2972340 00:32:28.658 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2972340 00:32:28.918 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:28.918 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:28.918 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:28.918 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:32:28.918 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:32:28.918 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:28.918 14:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:32:28.918 14:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:28.918 14:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:28.918 14:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:28.918 14:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:28.918 14:22:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.828 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:30.828 00:32:30.828 real 0m22.586s 00:32:30.828 user 0m24.782s 00:32:30.828 sys 0m7.535s 00:32:30.828 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:32:30.828 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:30.828 ************************************ 00:32:30.828 END TEST nvmf_queue_depth 00:32:30.828 ************************************ 00:32:30.828 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:30.828 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:30.828 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:30.828 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:31.090 ************************************ 00:32:31.090 START TEST nvmf_target_multipath 00:32:31.090 ************************************ 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:31.090 * Looking for test storage... 00:32:31.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:32:31.090 14:22:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:32:31.090 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:32:31.091 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:32:31.091 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:31.091 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:32:31.091 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:32:31.091 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:31.091 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:31.091 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:32:31.091 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:31.091 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:31.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.091 --rc genhtml_branch_coverage=1 00:32:31.091 --rc genhtml_function_coverage=1 00:32:31.091 --rc genhtml_legend=1 00:32:31.091 --rc geninfo_all_blocks=1 00:32:31.091 --rc geninfo_unexecuted_blocks=1 00:32:31.091 00:32:31.091 ' 00:32:31.091 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:31.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.091 --rc genhtml_branch_coverage=1 00:32:31.091 --rc genhtml_function_coverage=1 00:32:31.091 --rc genhtml_legend=1 00:32:31.091 --rc geninfo_all_blocks=1 00:32:31.091 --rc geninfo_unexecuted_blocks=1 00:32:31.091 00:32:31.091 ' 00:32:31.091 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:31.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.091 --rc genhtml_branch_coverage=1 00:32:31.091 --rc genhtml_function_coverage=1 00:32:31.091 --rc genhtml_legend=1 00:32:31.091 --rc geninfo_all_blocks=1 00:32:31.091 --rc 
geninfo_unexecuted_blocks=1 00:32:31.091 00:32:31.091 ' 00:32:31.091 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:31.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.091 --rc genhtml_branch_coverage=1 00:32:31.091 --rc genhtml_function_coverage=1 00:32:31.091 --rc genhtml_legend=1 00:32:31.091 --rc geninfo_all_blocks=1 00:32:31.091 --rc geninfo_unexecuted_blocks=1 00:32:31.091 00:32:31.091 ' 00:32:31.091 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:31.091 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:32:31.091 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:31.091 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:31.091 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:31.091 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:31.091 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:31.091 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:31.091 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:31.091 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:31.091 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:31.091 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
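The dense scripts/common.sh trace above is only the lcov version gate: lt 1.15 2 asks whether the installed lcov predates 2.x so the matching set of coverage options can be exported. Reduced to the one operator this check needs (the real cmp_versions dispatches on an operator argument via case and validates each field with decimal()), a condensed reconstruction:

  lt() {                               # lt VER1 VER2 -> success when VER1 < VER2
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15); also splits on - and :
      IFS=.-: read -ra ver2 <<< "$2"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for ((v = 0; v < len; v++)); do
          local a=$((10#${ver1[v]:-0})) b=$((10#${ver2[v]:-0}))
          ((a < b)) && return 0
          ((a > b)) && return 1
      done
      return 1                         # versions equal, so not strictly less
  }
  lt 1.15 2 && echo "installed lcov is pre-2.x; use the older option set"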
00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:31.352 14:22:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:31.352 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:31.353 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:31.353 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:31.353 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:31.353 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:31.353 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:32:31.353 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:31.353 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:31.353 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:31.353 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:31.353 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:31.353 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.353 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:31.353 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.353 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:31.353 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:31.353 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:32:31.353 14:22:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
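nvmftestinit now re-probes the NICs for the multipath test; the long gather_supported_nvmf_pci_devs trace that follows reduces to globbing each allow-listed PCI function's net/ directory in sysfs and keeping ports whose link is up. A rough reconstruction (the up test appears pre-expanded as [[ up == up ]] in the trace, so the exact source expression is an assumption):

  net_devs=()
  for pci in "${pci_devs[@]}"; do                       # here: 0000:4b:00.0 and 0000:4b:00.1
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # interfaces bound to this function
      for net_dev in "${pci_net_devs[@]}"; do
          if [[ $(< "$net_dev/operstate") == up ]]; then
              net_devs+=("${net_dev##*/}")              # basename only: cvl_0_0, cvl_0_1
          fi
      done
  done
  ((${#net_devs[@]})) || exit 1                         # the (( 2 == 0 )) guard in the trace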
00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:39.491 14:22:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:39.491 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:39.491 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:39.491 14:22:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:39.491 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:39.491 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:39.491 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:39.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:39.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:32:39.492 00:32:39.492 --- 10.0.0.2 ping statistics --- 00:32:39.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.492 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:39.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:39.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:32:39.492 00:32:39.492 --- 10.0.0.1 ping statistics --- 00:32:39.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.492 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:32:39.492 only one NIC for nvmf test 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:39.492 rmmod nvme_tcp 00:32:39.492 rmmod nvme_fabrics 00:32:39.492 rmmod nvme_keyring 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:39.492 14:22:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:39.492 14:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.873 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:40.873 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:32:40.873 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:32:40.873 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:40.873 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:40.873 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:40.873 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:40.873 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:40.873 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:40.873 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:40.873 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:40.873 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:40.873 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:40.874 14:22:47 
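Two teardown idioms recur throughout this log and are worth spelling out: iptr strips only the firewall rules the harness tagged at setup time, and the namespace cleanup runs with its trace fd redirected away so it does not flood the console.

    # nvmf/common.sh@791 as traced above: round-trip the ruleset,
    # dropping only the SPDK-tagged rules.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # common/autotest_common.sh@22: fd 15 (presumably the xtrace fd)
    # is silenced while the netns is torn down.
    eval '_remove_spdk_ns 15> /dev/null'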
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:40.874 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:40.874 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:40.874 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:40.874 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:40.874 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:40.874 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:40.874 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:40.874 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:40.874 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:40.874 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:40.874 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.874 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:40.874 00:32:40.874 real 0m9.944s 00:32:40.874 user 0m2.203s 00:32:40.874 sys 0m5.691s 00:32:40.874 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:40.874 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:40.874 ************************************ 00:32:40.874 END TEST nvmf_target_multipath 00:32:40.874 ************************************ 00:32:40.874 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:40.874 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:40.874 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:40.874 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:41.135 ************************************ 00:32:41.135 START TEST nvmf_zcopy 00:32:41.135 ************************************ 00:32:41.135 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:41.135 * Looking for test storage... 
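With multipath skipped, the harness moves on to the zcopy suite. The run_test wrapper invocation recorded above is, verbatim (run_test is the harness helper that times the suite and prints the START/END TEST banners seen in this log):

    run_test nvmf_zcopy \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh \
        --transport=tcp --interrupt-mode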
00:32:41.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:41.135 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:41.135 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:32:41.135 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:41.135 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:41.135 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:41.135 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:41.135 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:41.135 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:32:41.135 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:32:41.135 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:32:41.135 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:32:41.135 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:32:41.135 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:32:41.135 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:41.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.136 --rc genhtml_branch_coverage=1 00:32:41.136 --rc genhtml_function_coverage=1 00:32:41.136 --rc genhtml_legend=1 00:32:41.136 --rc geninfo_all_blocks=1 00:32:41.136 --rc geninfo_unexecuted_blocks=1 00:32:41.136 00:32:41.136 ' 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:41.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.136 --rc genhtml_branch_coverage=1 00:32:41.136 --rc genhtml_function_coverage=1 00:32:41.136 --rc genhtml_legend=1 00:32:41.136 --rc geninfo_all_blocks=1 00:32:41.136 --rc geninfo_unexecuted_blocks=1 00:32:41.136 00:32:41.136 ' 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:41.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.136 --rc genhtml_branch_coverage=1 00:32:41.136 --rc genhtml_function_coverage=1 00:32:41.136 --rc genhtml_legend=1 00:32:41.136 --rc geninfo_all_blocks=1 00:32:41.136 --rc geninfo_unexecuted_blocks=1 00:32:41.136 00:32:41.136 ' 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:41.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.136 --rc genhtml_branch_coverage=1 00:32:41.136 --rc genhtml_function_coverage=1 00:32:41.136 --rc genhtml_legend=1 00:32:41.136 --rc geninfo_all_blocks=1 00:32:41.136 --rc geninfo_unexecuted_blocks=1 00:32:41.136 00:32:41.136 ' 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
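The version-compare walk above is the harness probing the installed lcov and picking coverage flags accordingly. Condensed to its effect (lt delegates to the cmp_versions helper whose steps are traced line by line above):

    # common/autotest_common.sh@1710-1712, condensed: the lcov_* rc flag
    # names are used for lcov releases older than 2.
    lcov_version=$(lcov --version | awk '{print $NF}')   # 1.15 in this run
    if lt "$lcov_version" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi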
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.136 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:32:41.137 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.137 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:32:41.137 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:41.137 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:41.137 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:41.137 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:41.137 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:41.137 14:22:47 
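Stepping back from the PATH rebuild above: sourcing test/nvmf/common.sh at the start of zcopy.sh pinned the suite constants used for everything that follows; only the host NQN is generated fresh per run.

    # Constants from nvmf/common.sh@9-22 as traced above.
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100
    NVMF_TCP_IP_ADDRESS=127.0.0.1
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be   # this run's uuid suffix
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn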
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:41.137 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:41.137 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:41.137 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:41.137 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:41.137 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:32:41.137 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:41.137 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:41.137 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:41.137 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:41.137 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:41.137 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.137 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:41.137 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.530 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:41.530 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:41.530 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:32:41.530 14:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:49.672 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:32:49.673 14:22:54 
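Because this suite runs under --interrupt-mode, the flag is appended to the target's argument vector before anything is launched; the guarding test flag evaluates to 1 above, though the variable's name is elided in the trace.

    # nvmf/common.sh@29-34: base target args plus the interrupt-mode switch.
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
    NVMF_APP+=("${NO_HUGE[@]}")
    NVMF_APP+=(--interrupt-mode)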
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:49.673 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:49.673 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:49.673 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
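Both E810 ports (8086:0x159b, ice driver) pass the device-ID checks, and the loop above then resolves each PCI function to its kernel netdev through sysfs. The per-device step, condensed:

    # nvmf/common.sh@410-429, condensed: map a PCI function to its netdev.
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # basename only, e.g. cvl_0_0
        net_devs+=("${pci_net_devs[@]}")
    done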
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:49.673 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:49.673 14:22:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:49.673 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:49.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:49.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:32:49.673 00:32:49.673 --- 10.0.0.2 ping statistics --- 00:32:49.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:49.674 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:32:49.674 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:49.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:49.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:32:49.674 00:32:49.674 --- 10.0.0.1 ping statistics --- 00:32:49.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:49.674 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:32:49.674 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:49.674 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:32:49.674 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:49.674 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:49.674 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:49.674 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:49.674 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:49.674 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:49.674 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:49.674 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:32:49.674 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:49.674 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:49.674 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:49.674 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2983007 00:32:49.674 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2983007 00:32:49.674 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
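The whole nvmf_tcp_init sequence traced above, through the ping checks just shown, reduces to: move the target NIC into a private network namespace, address both ends on 10.0.0.0/24, open the NVMe/TCP port with a tagged firewall rule, and prove reachability in both directions.

    # Condensed from nvmf/common.sh@250-291 as traced above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator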
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:49.674 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2983007 ']' 00:32:49.674 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:49.674 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:49.674 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:49.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:49.674 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:49.674 14:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:49.674 [2024-12-05 14:22:54.956889] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:49.674 [2024-12-05 14:22:54.958001] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:32:49.674 [2024-12-05 14:22:54.958046] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:49.674 [2024-12-05 14:22:55.055141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.674 [2024-12-05 14:22:55.104903] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:49.674 [2024-12-05 14:22:55.104951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:49.674 [2024-12-05 14:22:55.104960] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:49.674 [2024-12-05 14:22:55.104967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:49.674 [2024-12-05 14:22:55.104974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:49.674 [2024-12-05 14:22:55.105716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:49.674 [2024-12-05 14:22:55.183001] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:49.674 [2024-12-05 14:22:55.183285] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
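nvmfappstart launches the target inside the namespace on a single core with interrupt mode enabled, then blocks until the RPC socket answers; the thread.c and reactor.c NOTICE lines above confirm the reactor and both spdk_threads came up in intr mode.

    # nvmf/common.sh@508-510 as traced above (pid 2983007 in this run).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # polls /var/tmp/spdk.sock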
00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:49.674 [2024-12-05 14:22:55.826593] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:49.674 [2024-12-05 14:22:55.854885] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:32:49.674 14:22:55 
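The target is then configured over RPC: a zero-copy-enabled TCP transport with in-capsule data disabled (-c 0), a subsystem capped at 10 namespaces (-m 10) that allows any host (-a), and data plus discovery listeners on 10.0.0.2:4420. The sequence, pulled out of the trace:

    # zcopy.sh@22-27 as traced above; rpc_cmd is the harness RPC wrapper.
    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420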
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:49.674 malloc0 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:49.674 { 00:32:49.674 "params": { 00:32:49.674 "name": "Nvme$subsystem", 00:32:49.674 "trtype": "$TEST_TRANSPORT", 00:32:49.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:49.674 "adrfam": "ipv4", 00:32:49.674 "trsvcid": "$NVMF_PORT", 00:32:49.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:49.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:49.674 "hdgst": ${hdgst:-false}, 00:32:49.674 "ddgst": ${ddgst:-false} 00:32:49.674 }, 00:32:49.674 "method": "bdev_nvme_attach_controller" 00:32:49.674 } 00:32:49.674 EOF 00:32:49.674 )") 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:49.674 14:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:49.674 "params": { 00:32:49.674 "name": "Nvme1", 00:32:49.674 "trtype": "tcp", 00:32:49.674 "traddr": "10.0.0.2", 00:32:49.674 "adrfam": "ipv4", 00:32:49.674 "trsvcid": "4420", 00:32:49.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:49.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:49.674 "hdgst": false, 00:32:49.674 "ddgst": false 00:32:49.674 }, 00:32:49.674 "method": "bdev_nvme_attach_controller" 00:32:49.674 }' 00:32:49.674 [2024-12-05 14:22:55.966232] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
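A 32 MiB malloc bdev (4096-byte blocks) backs namespace 1, and bdevperf attaches as an NVMe/TCP initiator using the JSON rendered by gen_nvmf_target_json above. The --json /dev/fd/62 argument in the trace is bash process substitution; an equivalent spelling:

    # zcopy.sh@29-33 as traced above, with the fd trick made explicit.
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192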
00:32:49.674 [2024-12-05 14:22:55.966318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2983328 ] 00:32:49.935 [2024-12-05 14:22:56.060358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.935 [2024-12-05 14:22:56.113113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.196 Running I/O for 10 seconds... 00:32:52.080 6410.00 IOPS, 50.08 MiB/s [2024-12-05T13:22:59.322Z] 6468.50 IOPS, 50.54 MiB/s [2024-12-05T13:23:00.707Z] 6493.67 IOPS, 50.73 MiB/s [2024-12-05T13:23:01.651Z] 6494.25 IOPS, 50.74 MiB/s [2024-12-05T13:23:02.591Z] 6906.40 IOPS, 53.96 MiB/s [2024-12-05T13:23:03.533Z] 7373.67 IOPS, 57.61 MiB/s [2024-12-05T13:23:04.474Z] 7705.14 IOPS, 60.20 MiB/s [2024-12-05T13:23:05.422Z] 7956.38 IOPS, 62.16 MiB/s [2024-12-05T13:23:06.363Z] 8147.00 IOPS, 63.65 MiB/s [2024-12-05T13:23:06.363Z] 8304.90 IOPS, 64.88 MiB/s 00:33:00.063 Latency(us) 00:33:00.063 [2024-12-05T13:23:06.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:00.063 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:33:00.063 Verification LBA range: start 0x0 length 0x1000 00:33:00.063 Nvme1n1 : 10.01 8306.53 64.89 0.00 0.00 15361.83 744.11 27852.80 00:33:00.063 [2024-12-05T13:23:06.363Z] =================================================================================================================== 00:33:00.063 [2024-12-05T13:23:06.363Z] Total : 8306.53 64.89 0.00 0.00 15361.83 744.11 27852.80 00:33:00.323 14:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2985197 00:33:00.323 14:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:33:00.323 14:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:00.323 14:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:33:00.323 14:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:33:00.323 14:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:00.323 14:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:00.323 14:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:00.323 14:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:00.323 { 00:33:00.323 "params": { 00:33:00.323 "name": "Nvme$subsystem", 00:33:00.323 "trtype": "$TEST_TRANSPORT", 00:33:00.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:00.323 "adrfam": "ipv4", 00:33:00.323 "trsvcid": "$NVMF_PORT", 00:33:00.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:00.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:00.323 "hdgst": ${hdgst:-false}, 00:33:00.323 "ddgst": ${ddgst:-false} 00:33:00.323 }, 00:33:00.323 "method": "bdev_nvme_attach_controller" 00:33:00.323 } 00:33:00.323 EOF 00:33:00.323 )") 00:33:00.323 [2024-12-05 14:23:06.418122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:33:00.323 [2024-12-05 14:23:06.418149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:00.323 14:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:00.323 14:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:00.323 14:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:00.323 14:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:00.323 "params": { 00:33:00.323 "name": "Nvme1", 00:33:00.323 "trtype": "tcp", 00:33:00.323 "traddr": "10.0.0.2", 00:33:00.323 "adrfam": "ipv4", 00:33:00.323 "trsvcid": "4420", 00:33:00.323 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:00.323 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:00.323 "hdgst": false, 00:33:00.323 "ddgst": false 00:33:00.323 }, 00:33:00.323 "method": "bdev_nvme_attach_controller" 00:33:00.323 }' 00:33:00.323 [2024-12-05 14:23:06.430088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:00.323 [2024-12-05 14:23:06.430096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:00.323 [2024-12-05 14:23:06.442087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:00.323 [2024-12-05 14:23:06.442095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:00.323 [2024-12-05 14:23:06.454087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:00.323 [2024-12-05 14:23:06.454094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:00.323 [2024-12-05 14:23:06.461232] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
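From here the trace interleaves two things: a second bdevperf (pid 2985197) starting its 5-second randrw run against Nvme1n1, and a driver loop that keeps issuing nvmf_subsystem_add_ns for an NSID that is already allocated. Each subsystem.c:2130 / nvmf_rpc.c:1520 pair below is an expected rejection, and the callback name nvmf_rpc_ns_paused shows the add-ns RPC pauses the subsystem before failing, so the loop stresses the pause/resume path while zcopy I/O is in flight. The loop itself runs under xtrace_disable (zcopy.sh@41 above) and is therefore not traced; a purely hypothetical sketch of its shape:

    # Hypothetical sketch of the untraced loop behind the repeated errors
    # below; the rejection (NSID 1 already in use) is the point of the test.
    while kill -0 "$perfpid" 2>/dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done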
00:33:00.323 [2024-12-05 14:23:06.461280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2985197 ] 00:33:00.323 [2024-12-05 14:23:06.466086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:00.323 [2024-12-05 14:23:06.466094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:00.323 [2024-12-05 14:23:06.478086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:00.323 [2024-12-05 14:23:06.478098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:00.323 [2024-12-05 14:23:06.490087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:00.323 [2024-12-05 14:23:06.490095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:00.323 [2024-12-05 14:23:06.502086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:00.323 [2024-12-05 14:23:06.502094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:00.323 [2024-12-05 14:23:06.514086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:00.323 [2024-12-05 14:23:06.514093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:00.323 [2024-12-05 14:23:06.526086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:00.323 [2024-12-05 14:23:06.526093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:00.323 [2024-12-05 14:23:06.538086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:00.323 [2024-12-05 14:23:06.538094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:00.323 [2024-12-05 14:23:06.545167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.323 [2024-12-05 14:23:06.550087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:00.323 [2024-12-05 14:23:06.550095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:00.323 [2024-12-05 14:23:06.562087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:00.323 [2024-12-05 14:23:06.562096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:00.323 [2024-12-05 14:23:06.574087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:00.323 [2024-12-05 14:23:06.574097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:00.323 [2024-12-05 14:23:06.574384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.323 [2024-12-05 14:23:06.586091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:00.323 [2024-12-05 14:23:06.586099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:00.324 [2024-12-05 14:23:06.598094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:00.324 [2024-12-05 14:23:06.598106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:00.324 [2024-12-05 14:23:06.610090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
00:33:00.584 [2024-12-05 14:23:06.862093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:00.584 [2024-12-05 14:23:06.862107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:00.584 Running I/O for 5 seconds...
00:33:00.584 [2024-12-05 14:23:06.877206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:00.584 [2024-12-05 14:23:06.877221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
...
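The error pairs that flood this stretch are emitted by the nvmf target process and interleaved by the CI console with bdevperf's own output: some loop keeps invoking the nvmf_subsystem_add_ns RPC with an NSID that is already allocated, and spdk_nvmf_subsystem_add_ns_ext() rejects every attempt, roughly every 12 ms. A sketch of a driver loop that would produce this pattern follows; the subsystem NQN and bdev name are illustrative assumptions, while nvmf_subsystem_add_ns itself is the standard scripts/rpc.py call:

  NQN=nqn.2016-06.io.spdk:cnode1   # hypothetical subsystem NQN
  for _ in $(seq 1 500); do
      # NSID 1 is already occupied, so each call fails with
      # "Requested NSID 1 already in use" / "Unable to add namespace".
      ./scripts/rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1 -n 1 || true
  done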
00:33:01.627 [2024-12-05 14:23:07.866966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:01.627 [2024-12-05 14:23:07.866980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:01.627 19101.00 IOPS, 149.23 MiB/s [2024-12-05T13:23:07.927Z]
00:33:01.627 [2024-12-05 14:23:07.881289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:01.627 [2024-12-05 14:23:07.881305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
...
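The interim stats line above is internally consistent: 149.23 MiB/s at 19101.00 IOPS works out to exactly 8 KiB per I/O. The I/O size is an inference from that ratio, not something the log states:

  # 19101 IOPS x 8192 B = 156,475,392 B/s = 149.23 MiB/s
  awk 'BEGIN { printf "%.2f MiB/s\n", 19101.00 * 8192 / (1024 * 1024) }'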
00:33:02.675 [2024-12-05 14:23:08.868916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:02.675 [2024-12-05 14:23:08.868930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:02.675 19126.00 IOPS, 149.42 MiB/s [2024-12-05T13:23:08.975Z]
00:33:02.675 [2024-12-05 14:23:08.881750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:02.675 [2024-12-05 14:23:08.881765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
...
00:33:03.720 [2024-12-05 14:23:09.869163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:03.720 [2024-12-05 14:23:09.869178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:03.720 19137.33 IOPS, 149.51 MiB/s [2024-12-05T13:23:10.020Z]
00:33:03.720 [2024-12-05 14:23:09.882099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:03.720 [2024-12-05 14:23:09.882114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
...
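The three interim readings so far (19101.00, then 19126.00, then 19137.33 IOPS) behave like cumulative averages over one, two, and three reporting intervals. Under that interpretation, which is an inference about bdevperf's reporting rather than anything the log states, the per-interval rates can be recovered:

  # i*avg[i] - (i-1)*avg[i-1] recovers interval i's own rate;
  # prints 19101, 19151, 19160 IOPS, a slowly warming but steady run.
  awk 'BEGIN {
      a[1] = 19101.00; a[2] = 19126.00; a[3] = 19137.33; prev = 0
      for (i = 1; i <= 3; i++) { printf "interval %d: %.0f IOPS\n", i, i*a[i] - prev; prev = i*a[i] }
  }'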
00:33:04.242 [2024-12-05 14:23:10.492856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:04.242 [2024-12-05 14:23:10.492875]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.242 [2024-12-05 14:23:10.505858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.242 [2024-12-05 14:23:10.505872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.242 [2024-12-05 14:23:10.518670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.242 [2024-12-05 14:23:10.518684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.242 [2024-12-05 14:23:10.532712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.242 [2024-12-05 14:23:10.532726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.502 [2024-12-05 14:23:10.545589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.502 [2024-12-05 14:23:10.545605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.502 [2024-12-05 14:23:10.558496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.503 [2024-12-05 14:23:10.558510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.503 [2024-12-05 14:23:10.573397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.503 [2024-12-05 14:23:10.573412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.503 [2024-12-05 14:23:10.586377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.503 [2024-12-05 14:23:10.586391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.503 [2024-12-05 14:23:10.601394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.503 [2024-12-05 14:23:10.601409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.503 [2024-12-05 14:23:10.614503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.503 [2024-12-05 14:23:10.614516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.503 [2024-12-05 14:23:10.628937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.503 [2024-12-05 14:23:10.628951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.503 [2024-12-05 14:23:10.641941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.503 [2024-12-05 14:23:10.641955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.503 [2024-12-05 14:23:10.655053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.503 [2024-12-05 14:23:10.655067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.503 [2024-12-05 14:23:10.669544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.503 [2024-12-05 14:23:10.669560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.503 [2024-12-05 14:23:10.682449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.503 [2024-12-05 14:23:10.682467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.503 [2024-12-05 14:23:10.697654] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.503 [2024-12-05 14:23:10.697669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.503 [2024-12-05 14:23:10.710643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.503 [2024-12-05 14:23:10.710656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.503 [2024-12-05 14:23:10.725410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.503 [2024-12-05 14:23:10.725424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.503 [2024-12-05 14:23:10.738270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.503 [2024-12-05 14:23:10.738284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.503 [2024-12-05 14:23:10.751018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.503 [2024-12-05 14:23:10.751032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.503 [2024-12-05 14:23:10.765606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.503 [2024-12-05 14:23:10.765621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.503 [2024-12-05 14:23:10.778577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.503 [2024-12-05 14:23:10.778591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.503 [2024-12-05 14:23:10.793637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.503 [2024-12-05 14:23:10.793651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.763 [2024-12-05 14:23:10.806513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.763 [2024-12-05 14:23:10.806527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.763 [2024-12-05 14:23:10.821566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.763 [2024-12-05 14:23:10.821580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.763 [2024-12-05 14:23:10.834765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.763 [2024-12-05 14:23:10.834779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.763 [2024-12-05 14:23:10.849162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.763 [2024-12-05 14:23:10.849177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.763 [2024-12-05 14:23:10.862131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.763 [2024-12-05 14:23:10.862145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.763 [2024-12-05 14:23:10.874963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.763 [2024-12-05 14:23:10.874976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.763 19131.50 IOPS, 149.46 MiB/s [2024-12-05T13:23:11.063Z] [2024-12-05 14:23:10.889220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:04.763 [2024-12-05 14:23:10.889234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.763 [2024-12-05 14:23:10.902092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.763 [2024-12-05 14:23:10.902106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.763 [2024-12-05 14:23:10.914794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.763 [2024-12-05 14:23:10.914808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.763 [2024-12-05 14:23:10.928998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.763 [2024-12-05 14:23:10.929012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.763 [2024-12-05 14:23:10.942257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.763 [2024-12-05 14:23:10.942271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.763 [2024-12-05 14:23:10.954979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.763 [2024-12-05 14:23:10.954993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.763 [2024-12-05 14:23:10.969362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.763 [2024-12-05 14:23:10.969377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.763 [2024-12-05 14:23:10.982269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.763 [2024-12-05 14:23:10.982283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.763 [2024-12-05 14:23:10.994923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.763 [2024-12-05 14:23:10.994937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.763 [2024-12-05 14:23:11.009351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.764 [2024-12-05 14:23:11.009366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.764 [2024-12-05 14:23:11.022281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.764 [2024-12-05 14:23:11.022295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.764 [2024-12-05 14:23:11.035380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.764 [2024-12-05 14:23:11.035394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.764 [2024-12-05 14:23:11.049242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.764 [2024-12-05 14:23:11.049256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.023 [2024-12-05 14:23:11.062258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.023 [2024-12-05 14:23:11.062272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.023 [2024-12-05 14:23:11.074724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.023 [2024-12-05 14:23:11.074737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.023 [2024-12-05 14:23:11.089451] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.023 [2024-12-05 14:23:11.089470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.023 [2024-12-05 14:23:11.102485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.023 [2024-12-05 14:23:11.102498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.023 [2024-12-05 14:23:11.117198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.023 [2024-12-05 14:23:11.117212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.023 [2024-12-05 14:23:11.130198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.023 [2024-12-05 14:23:11.130212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.023 [2024-12-05 14:23:11.143301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.023 [2024-12-05 14:23:11.143315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.023 [2024-12-05 14:23:11.157582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.023 [2024-12-05 14:23:11.157596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.023 [2024-12-05 14:23:11.170207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.023 [2024-12-05 14:23:11.170221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.023 [2024-12-05 14:23:11.183133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.023 [2024-12-05 14:23:11.183147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.023 [2024-12-05 14:23:11.197103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.023 [2024-12-05 14:23:11.197117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.023 [2024-12-05 14:23:11.209951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.023 [2024-12-05 14:23:11.209965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.023 [2024-12-05 14:23:11.223277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.023 [2024-12-05 14:23:11.223291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.023 [2024-12-05 14:23:11.237213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.023 [2024-12-05 14:23:11.237227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.023 [2024-12-05 14:23:11.250486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.023 [2024-12-05 14:23:11.250505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.023 [2024-12-05 14:23:11.265947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.023 [2024-12-05 14:23:11.265961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.024 [2024-12-05 14:23:11.279158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.024 [2024-12-05 14:23:11.279172] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.024 [2024-12-05 14:23:11.293122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.024 [2024-12-05 14:23:11.293137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.024 [2024-12-05 14:23:11.306483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.024 [2024-12-05 14:23:11.306496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.284 [2024-12-05 14:23:11.321436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.284 [2024-12-05 14:23:11.321451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.284 [2024-12-05 14:23:11.334605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.284 [2024-12-05 14:23:11.334618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.284 [2024-12-05 14:23:11.349681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.284 [2024-12-05 14:23:11.349695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.284 [2024-12-05 14:23:11.362600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.284 [2024-12-05 14:23:11.362613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.284 [2024-12-05 14:23:11.377706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.284 [2024-12-05 14:23:11.377722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.284 [2024-12-05 14:23:11.390533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.284 [2024-12-05 14:23:11.390547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.284 [2024-12-05 14:23:11.405182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.284 [2024-12-05 14:23:11.405197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.284 [2024-12-05 14:23:11.418193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.284 [2024-12-05 14:23:11.418208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.284 [2024-12-05 14:23:11.431290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.284 [2024-12-05 14:23:11.431304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.284 [2024-12-05 14:23:11.445122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.284 [2024-12-05 14:23:11.445136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.284 [2024-12-05 14:23:11.458245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.284 [2024-12-05 14:23:11.458259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.284 [2024-12-05 14:23:11.470606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.284 [2024-12-05 14:23:11.470620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.284 [2024-12-05 14:23:11.485459] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.284 [2024-12-05 14:23:11.485474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.284 [2024-12-05 14:23:11.498797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.284 [2024-12-05 14:23:11.498812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.284 [2024-12-05 14:23:11.513523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.284 [2024-12-05 14:23:11.513542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.284 [2024-12-05 14:23:11.526235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.284 [2024-12-05 14:23:11.526250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.284 [2024-12-05 14:23:11.538939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.284 [2024-12-05 14:23:11.538953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.284 [2024-12-05 14:23:11.553665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.284 [2024-12-05 14:23:11.553679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.284 [2024-12-05 14:23:11.566471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.284 [2024-12-05 14:23:11.566485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.284 [2024-12-05 14:23:11.581359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.284 [2024-12-05 14:23:11.581374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.545 [2024-12-05 14:23:11.594302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.545 [2024-12-05 14:23:11.594317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.545 [2024-12-05 14:23:11.607113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.545 [2024-12-05 14:23:11.607127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.545 [2024-12-05 14:23:11.620944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.545 [2024-12-05 14:23:11.620959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.545 [2024-12-05 14:23:11.634090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.545 [2024-12-05 14:23:11.634104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.545 [2024-12-05 14:23:11.647141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.545 [2024-12-05 14:23:11.647155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.545 [2024-12-05 14:23:11.661228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.545 [2024-12-05 14:23:11.661242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.545 [2024-12-05 14:23:11.674438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.545 [2024-12-05 14:23:11.674452] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.545 [2024-12-05 14:23:11.689422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.545 [2024-12-05 14:23:11.689437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.545 [2024-12-05 14:23:11.702848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.545 [2024-12-05 14:23:11.702864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.545 [2024-12-05 14:23:11.717367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.545 [2024-12-05 14:23:11.717382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.545 [2024-12-05 14:23:11.730350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.545 [2024-12-05 14:23:11.730364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.545 [2024-12-05 14:23:11.744770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.545 [2024-12-05 14:23:11.744784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.545 [2024-12-05 14:23:11.757827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.545 [2024-12-05 14:23:11.757841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.545 [2024-12-05 14:23:11.770625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.545 [2024-12-05 14:23:11.770647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.545 [2024-12-05 14:23:11.785602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.545 [2024-12-05 14:23:11.785617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.545 [2024-12-05 14:23:11.798857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.545 [2024-12-05 14:23:11.798871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.545 [2024-12-05 14:23:11.812531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.545 [2024-12-05 14:23:11.812545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.545 [2024-12-05 14:23:11.825487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.545 [2024-12-05 14:23:11.825502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.545 [2024-12-05 14:23:11.838866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.545 [2024-12-05 14:23:11.838880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.805 [2024-12-05 14:23:11.853116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.805 [2024-12-05 14:23:11.853130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.805 [2024-12-05 14:23:11.865679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.805 [2024-12-05 14:23:11.865693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.805 [2024-12-05 14:23:11.878772] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.805 [2024-12-05 14:23:11.878786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.805 19131.20 IOPS, 149.46 MiB/s [2024-12-05T13:23:12.105Z] [2024-12-05 14:23:11.890668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.805 [2024-12-05 14:23:11.890682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.805 00:33:05.805 Latency(us) 00:33:05.805 [2024-12-05T13:23:12.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:05.805 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:33:05.805 Nvme1n1 : 5.01 19131.51 149.46 0.00 0.00 6684.45 2648.75 11359.57 00:33:05.805 [2024-12-05T13:23:12.105Z] =================================================================================================================== 00:33:05.805 [2024-12-05T13:23:12.105Z] Total : 19131.51 149.46 0.00 0.00 6684.45 2648.75 11359.57 00:33:05.805 [2024-12-05 14:23:11.902092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.805 [2024-12-05 14:23:11.902106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.805 [2024-12-05 14:23:11.914096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.805 [2024-12-05 14:23:11.914108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.805 [2024-12-05 14:23:11.926093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.805 [2024-12-05 14:23:11.926104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.805 [2024-12-05 14:23:11.938094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.805 [2024-12-05 14:23:11.938105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.805 [2024-12-05 14:23:11.950089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.805 [2024-12-05 14:23:11.950099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.805 [2024-12-05 14:23:11.962087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.805 [2024-12-05 14:23:11.962095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.805 [2024-12-05 14:23:11.974089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.805 [2024-12-05 14:23:11.974099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.805 [2024-12-05 14:23:11.986089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.805 [2024-12-05 14:23:11.986099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.805 [2024-12-05 14:23:11.998086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.805 [2024-12-05 14:23:11.998094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2985197) - No such process 00:33:05.805 14:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2985197 00:33:05.805 14:23:12 
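[editor's note: the flood of NSID-collision pairs above is induced deliberately: while the I/O job runs, zcopy.sh keeps trying to add a namespace whose NSID is already taken, exercising the subsystem pause/resume path in the RPC layer. A minimal standalone sketch of that loop, assuming a running target that already exposes NSID 1 on cnode1; the bdev name malloc0 and the loop bound are illustrative, not read from zcopy.sh:]
# Hypothetical reproduction of the collision loop; assumes a live SPDK target
# (started from the repo root) whose subsystem nqn.2016-06.io.spdk:cnode1
# already has NSID 1 attached.
for _ in $(seq 1 20); do
    # The target rejects the duplicate ("Requested NSID 1 already in use"),
    # and the RPC layer then logs "Unable to add namespace" -- the exact
    # two-line pair repeated throughout the log above.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done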
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:05.806 14:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.806 14:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:05.806 14:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.806 14:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:05.806 14:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.806 14:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:05.806 delay0 00:33:05.806 14:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.806 14:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:33:05.806 14:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.806 14:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:05.806 14:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.806 14:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:33:06.065 [2024-12-05 14:23:12.165813] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:12.644 Initializing NVMe Controllers 00:33:12.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:12.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:12.644 Initialization complete. Launching workers. 
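[editor's note: the RPC sequence above swaps the namespace for a deliberately slow one before launching the abort example: NSID 1 is removed, bdev_delay_create layers delay0 on top of malloc0 (the four values are, as the flags are conventionally read, average and p99 read/write latencies in microseconds, so every I/O gains about one second), and delay0 is re-exported as NSID 1. That guarantees the abort tool always finds commands still in flight. The same steps as plain rpc.py calls, a sketch assuming the default RPC socket, with the values taken from the trace:]
# Re-create the slow namespace used for the abort run (sketch; run from the
# SPDK repo root against an already-running target).
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000    # latencies in microseconds
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# The abort example then runs for 5 s (-t 5) at queue depth 64 (-q 64) with a
# 50 % read mix (-w randrw -M 50), aborting its own queued I/O the whole time.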
00:33:12.644 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 298, failed: 9008 00:33:12.644 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 9243, failed to submit 63 00:33:12.644 success 9127, unsuccessful 116, failed 0 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:12.644 rmmod nvme_tcp 00:33:12.644 rmmod nvme_fabrics 00:33:12.644 rmmod nvme_keyring 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2983007 ']' 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2983007 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2983007 ']' 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2983007 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2983007 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2983007' 00:33:12.644 killing process with pid 2983007 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2983007 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2983007 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:12.644 14:23:18 
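[editor's note: one reading of the abort counters printed above, an assumption on my part but consistent with the totals: every submitted abort is accounted for as either success or unsuccessful, while the 63 that "failed to submit" sit outside that total. Plain shell arithmetic confirms the bookkeeping:]
# 9127 successful + 116 unsuccessful = 9243, the "abort submitted" count.
echo $((9127 + 116))    # -> 9243
# 9243 submitted + 63 never queued = 9306 abort attempts overall.
echo $((9243 + 63))     # -> 9306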
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:12.644 14:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.200 14:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:15.200 00:33:15.200 real 0m33.706s 00:33:15.200 user 0m43.035s 00:33:15.200 sys 0m12.263s 00:33:15.200 14:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:15.200 14:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:15.200 ************************************ 00:33:15.200 END TEST nvmf_zcopy 00:33:15.200 ************************************ 00:33:15.200 14:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:15.201 14:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:15.201 14:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:15.201 14:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:15.201 ************************************ 00:33:15.201 START TEST nvmf_nmic 00:33:15.201 ************************************ 00:33:15.201 14:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:15.201 * Looking for test storage... 
00:33:15.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:15.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.201 --rc genhtml_branch_coverage=1 00:33:15.201 --rc genhtml_function_coverage=1 00:33:15.201 --rc genhtml_legend=1 00:33:15.201 --rc geninfo_all_blocks=1 00:33:15.201 --rc geninfo_unexecuted_blocks=1 00:33:15.201 00:33:15.201 ' 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:15.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.201 --rc genhtml_branch_coverage=1 00:33:15.201 --rc genhtml_function_coverage=1 00:33:15.201 --rc genhtml_legend=1 00:33:15.201 --rc geninfo_all_blocks=1 00:33:15.201 --rc geninfo_unexecuted_blocks=1 00:33:15.201 00:33:15.201 ' 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:15.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.201 --rc genhtml_branch_coverage=1 00:33:15.201 --rc genhtml_function_coverage=1 00:33:15.201 --rc genhtml_legend=1 00:33:15.201 --rc geninfo_all_blocks=1 00:33:15.201 --rc geninfo_unexecuted_blocks=1 00:33:15.201 00:33:15.201 ' 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:15.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.201 --rc genhtml_branch_coverage=1 00:33:15.201 --rc genhtml_function_coverage=1 00:33:15.201 --rc genhtml_legend=1 00:33:15.201 --rc geninfo_all_blocks=1 00:33:15.201 --rc geninfo_unexecuted_blocks=1 00:33:15.201 00:33:15.201 ' 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
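[editor's note: the cmp_versions walk traced above is how the suite picks the lcov option spelling: "lt 1.15 2" splits both versions on dots, dashes, and colons, compares field by field, and here concludes that lcov 1.15 predates 2, so the legacy "--rc lcov_branch_coverage=1" style options get exported. A standalone sketch of the same comparison; the helper name ver_lt is illustrative, the real logic lives in scripts/common.sh:]
# Field-by-field version compare, mirroring the cmp_versions trace above.
ver_lt() {
    local IFS=.-: i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        ((${v1[i]:-0} < ${v2[i]:-0})) && return 0   # strictly older
        ((${v1[i]:-0} > ${v2[i]:-0})) && return 1   # strictly newer
    done
    return 1                                        # equal
}
ver_lt 1.15 2 && echo "lcov 1.15 predates 2: keep the legacy --rc option names"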
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:15.201 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:15.202 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:15.202 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:15.202 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:15.202 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain directories repeated several more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=[the value above with /opt/go/1.21.1/bin prepended]
14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=[the value above with /opt/protoc/21.7/bin prepended]
14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH
14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo [the exported PATH, identical to the value above]
14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
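[editor's note: the toolchain prefixes pile up because paths/export.sh prepends them every time it is sourced, once per nested run_test; duplicate PATH entries are harmless, since lookup stops at the first match. Purely as an illustration, nothing in the suite does this, a de-duplication pass keeping the first occurrence of each directory would look like:]
# Keep the first occurrence of every PATH component (illustrative only).
PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
export PATH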
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:15.202 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:15.202 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:15.202 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:15.202 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:15.202 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:15.202 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:15.202 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:15.202 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:15.202 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:15.202 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:15.202 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:15.202 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:15.202 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.202 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:15.202 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.202 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:15.202 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:15.202 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:15.202 14:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:23.341 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:23.341 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:33:23.341 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:23.341 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:23.341 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:23.341 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:23.341 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:23.341 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:33:23.341 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:23.341 14:23:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:33:23.341 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:33:23.341 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:33:23.341 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:33:23.341 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:33:23.341 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:33:23.341 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:23.341 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:23.341 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:23.341 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:23.341 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:23.341 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:23.341 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:23.342 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:23.342 14:23:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:23.342 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:23.342 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.342 
14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:23.342 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
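The nvmf_tcp_init plumbing traced here reduces to a short ip(8) sequence. A minimal sketch follows, assuming two back-to-back E810 ports that enumerated as cvl_0_0 (target side) and cvl_0_1 (initiator side), as in this run; the link-up, iptables ACCEPT rule, and ping verification that complete the setup appear in the trace just below.

TARGET_NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0                         # start from clean interfaces
ip -4 addr flush cvl_0_1
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"           # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator IP stays in the root namespace
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP

Isolating the target port in its own namespace is what lets a single host exercise real NIC-to-NIC traffic: the kernel cannot short-circuit 10.0.0.1 -> 10.0.0.2 over loopback, so the pings below actually cross the wire.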
00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:23.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:23.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:33:23.342 00:33:23.342 --- 10.0.0.2 ping statistics --- 00:33:23.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:23.342 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:23.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:23.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:33:23.342 00:33:23.342 --- 10.0.0.1 ping statistics --- 00:33:23.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:23.342 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2991695 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 2991695 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2991695 ']' 00:33:23.342 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:23.343 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:23.343 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:23.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:23.343 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:23.343 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:23.343 [2024-12-05 14:23:28.747654] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:23.343 [2024-12-05 14:23:28.748797] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:33:23.343 [2024-12-05 14:23:28.748847] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:23.343 [2024-12-05 14:23:28.823709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:23.343 [2024-12-05 14:23:28.872365] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:23.343 [2024-12-05 14:23:28.872418] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:23.343 [2024-12-05 14:23:28.872424] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:23.343 [2024-12-05 14:23:28.872430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:23.343 [2024-12-05 14:23:28.872434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:23.343 [2024-12-05 14:23:28.874558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:23.343 [2024-12-05 14:23:28.874729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:23.343 [2024-12-05 14:23:28.874967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:23.343 [2024-12-05 14:23:28.874968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:23.343 [2024-12-05 14:23:28.948651] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:23.343 [2024-12-05 14:23:28.949544] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:23.343 [2024-12-05 14:23:28.949777] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
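nvmfappstart launches the target inside the namespace and waitforlisten blocks until the RPC socket answers. A rough stand-in for that pair, using the workspace paths from this run, might look like the sketch below; the rpc_get_methods poll is an assumption (the harness' waitforlisten is implemented differently), but the nvmf_tgt invocation is exactly the one traced above.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF &     # 4 cores (0xF), interrupt mode, trace mask 0xFFFF
nvmfpid=$!

# Poll the default RPC socket until the app is up (stand-in for waitforlisten).
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
done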
00:33:23.343 [2024-12-05 14:23:28.950475] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:23.343 [2024-12-05 14:23:28.950520] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:23.343 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:23.343 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:33:23.343 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:23.343 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:23.343 14:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:23.343 [2024-12-05 14:23:29.039960] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:23.343 Malloc0 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
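Each rpc_cmd above is a thin wrapper over scripts/rpc.py against the running target (that is how autotest_common.sh defines it), so the provisioning sequence maps one-to-one onto plain CLI calls. A sketch, reusing $SPDK from the launch step:

rpc="$SPDK/scripts/rpc.py"

$rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB I/O unit size
$rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB ramdisk, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420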
00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:23.343 [2024-12-05 14:23:29.132318] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:33:23.343 test case1: single bdev can't be used in multiple subsystems 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:23.343 [2024-12-05 14:23:29.167575] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:33:23.343 [2024-12-05 14:23:29.167601] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:33:23.343 [2024-12-05 14:23:29.167610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:23.343 request: 00:33:23.343 { 00:33:23.343 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:33:23.343 "namespace": { 00:33:23.343 "bdev_name": "Malloc0", 00:33:23.343 "no_auto_visible": false, 00:33:23.343 "hide_metadata": false 00:33:23.343 }, 00:33:23.343 "method": "nvmf_subsystem_add_ns", 00:33:23.343 "req_id": 1 00:33:23.343 } 00:33:23.343 Got JSON-RPC error response 00:33:23.343 response: 00:33:23.343 { 00:33:23.343 "code": -32602, 00:33:23.343 "message": "Invalid parameters" 00:33:23.343 } 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:33:23.343 14:23:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:33:23.343 Adding namespace failed - expected result. 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:33:23.343 test case2: host connect to nvmf target in multiple paths 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:23.343 [2024-12-05 14:23:29.179730] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:23.343 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:33:23.917 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:33:23.917 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:33:23.917 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:23.917 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:33:23.917 14:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:33:25.835 14:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:25.835 14:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:25.835 14:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:25.835 14:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:33:25.835 14:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:25.835 14:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:33:25.835 14:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:25.835 [global] 00:33:25.835 thread=1 00:33:25.835 invalidate=1 
00:33:25.835 rw=write 00:33:25.835 time_based=1 00:33:25.835 runtime=1 00:33:25.835 ioengine=libaio 00:33:25.835 direct=1 00:33:25.835 bs=4096 00:33:25.835 iodepth=1 00:33:25.835 norandommap=0 00:33:25.836 numjobs=1 00:33:25.836 00:33:25.836 verify_dump=1 00:33:25.836 verify_backlog=512 00:33:25.836 verify_state_save=0 00:33:25.836 do_verify=1 00:33:25.836 verify=crc32c-intel 00:33:25.836 [job0] 00:33:25.836 filename=/dev/nvme0n1 00:33:25.836 Could not set queue depth (nvme0n1) 00:33:26.406 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:26.406 fio-3.35 00:33:26.406 Starting 1 thread 00:33:27.348 00:33:27.349 job0: (groupid=0, jobs=1): err= 0: pid=2992568: Thu Dec 5 14:23:33 2024 00:33:27.349 read: IOPS=17, BW=69.7KiB/s (71.4kB/s)(72.0KiB/1033msec) 00:33:27.349 slat (nsec): min=25898, max=31985, avg=26902.89, stdev=1420.71 00:33:27.349 clat (usec): min=796, max=42975, avg=37547.47, stdev=13333.47 00:33:27.349 lat (usec): min=828, max=43002, avg=37574.37, stdev=13332.60 00:33:27.349 clat percentiles (usec): 00:33:27.349 | 1.00th=[ 799], 5.00th=[ 799], 10.00th=[ 1074], 20.00th=[41157], 00:33:27.349 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:33:27.349 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:33:27.349 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:27.349 | 99.99th=[42730] 00:33:27.349 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:33:27.349 slat (usec): min=10, max=29906, avg=90.68, stdev=1320.29 00:33:27.349 clat (usec): min=235, max=897, avg=597.27, stdev=105.64 00:33:27.349 lat (usec): min=247, max=30597, avg=687.95, stdev=1328.92 00:33:27.349 clat percentiles (usec): 00:33:27.349 | 1.00th=[ 355], 5.00th=[ 396], 10.00th=[ 457], 20.00th=[ 506], 00:33:27.349 | 30.00th=[ 545], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 644], 00:33:27.349 | 70.00th=[ 668], 80.00th=[ 693], 90.00th=[ 717], 95.00th=[ 750], 00:33:27.349 | 99.00th=[ 799], 99.50th=[ 807], 99.90th=[ 898], 99.95th=[ 898], 00:33:27.349 | 99.99th=[ 898] 00:33:27.349 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:33:27.349 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:27.349 lat (usec) : 250=0.19%, 500=18.11%, 750=73.77%, 1000=4.72% 00:33:27.349 lat (msec) : 2=0.19%, 50=3.02% 00:33:27.349 cpu : usr=0.87%, sys=1.55%, ctx=533, majf=0, minf=1 00:33:27.349 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:27.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:27.349 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:27.349 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:27.349 00:33:27.349 Run status group 0 (all jobs): 00:33:27.349 READ: bw=69.7KiB/s (71.4kB/s), 69.7KiB/s-69.7KiB/s (71.4kB/s-71.4kB/s), io=72.0KiB (73.7kB), run=1033-1033msec 00:33:27.349 WRITE: bw=1983KiB/s (2030kB/s), 1983KiB/s-1983KiB/s (2030kB/s-2030kB/s), io=2048KiB (2097kB), run=1033-1033msec 00:33:27.349 00:33:27.349 Disk stats (read/write): 00:33:27.349 nvme0n1: ios=39/512, merge=0/0, ticks=1471/297, in_queue=1768, util=98.80% 00:33:27.349 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:27.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 
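For reference, the wrapper-generated job printed above, collected into a single file that can be replayed by hand against whatever device the connect enumerated (here /dev/nvme0n1; the nmic.fio file name is illustrative):

cat > nmic.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio nmic.fio

The crc32c-intel verify pass is what makes this a data-integrity check rather than a pure throughput run: every block written is read back and checksummed before the job reports success.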
00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:27.610 rmmod nvme_tcp 00:33:27.610 rmmod nvme_fabrics 00:33:27.610 rmmod nvme_keyring 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2991695 ']' 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2991695 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2991695 ']' 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2991695 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2991695 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2991695' 00:33:27.610 killing process with pid 2991695 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2991695 00:33:27.610 14:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2991695 00:33:27.871 14:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:27.871 14:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:27.871 14:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:27.871 14:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:33:27.871 14:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:33:27.871 14:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:27.871 14:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:33:27.871 14:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:27.871 14:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:27.871 14:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.871 14:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:27.871 14:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:30.413 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:30.413 00:33:30.413 real 0m15.108s 00:33:30.413 user 0m35.124s 00:33:30.413 sys 0m7.079s 00:33:30.413 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:30.413 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:30.413 ************************************ 00:33:30.413 END TEST nvmf_nmic 00:33:30.413 ************************************ 00:33:30.413 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:30.413 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:30.413 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:30.413 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:30.413 ************************************ 00:33:30.413 START TEST nvmf_fio_target 00:33:30.413 ************************************ 00:33:30.413 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:30.413 * Looking for test storage... 
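Before the nvmf_fio_target trace gets going, the nvmf_nmic teardown just traced (killprocess, module unload, iptr, namespace cleanup) condenses to roughly the following. The explicit netns delete is an assumption about _remove_spdk_ns, whose body is not shown in the trace; the other steps are traced directly above.

kill "$nvmfpid" && wait "$nvmfpid"               # killprocess 2991695
modprobe -v -r nvme-tcp                          # rmmod also drops nvme_fabrics/nvme_keyring
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: strip the SPDK ACCEPT rule
ip netns delete cvl_0_0_ns_spdk 2>/dev/null      # assumed cleanup inside _remove_spdk_ns
ip -4 addr flush cvl_0_1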
00:33:30.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:30.413 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:30.413 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:33:30.413 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:30.413 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:30.413 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:30.413 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:30.413 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:30.413 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:30.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.414 --rc genhtml_branch_coverage=1 00:33:30.414 --rc genhtml_function_coverage=1 00:33:30.414 --rc genhtml_legend=1 00:33:30.414 --rc geninfo_all_blocks=1 00:33:30.414 --rc geninfo_unexecuted_blocks=1 00:33:30.414 00:33:30.414 ' 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:30.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.414 --rc genhtml_branch_coverage=1 00:33:30.414 --rc genhtml_function_coverage=1 00:33:30.414 --rc genhtml_legend=1 00:33:30.414 --rc geninfo_all_blocks=1 00:33:30.414 --rc geninfo_unexecuted_blocks=1 00:33:30.414 00:33:30.414 ' 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:30.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.414 --rc genhtml_branch_coverage=1 00:33:30.414 --rc genhtml_function_coverage=1 00:33:30.414 --rc genhtml_legend=1 00:33:30.414 --rc geninfo_all_blocks=1 00:33:30.414 --rc geninfo_unexecuted_blocks=1 00:33:30.414 00:33:30.414 ' 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:30.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.414 --rc genhtml_branch_coverage=1 00:33:30.414 --rc genhtml_function_coverage=1 00:33:30.414 --rc genhtml_legend=1 00:33:30.414 --rc geninfo_all_blocks=1 00:33:30.414 --rc geninfo_unexecuted_blocks=1 00:33:30.414 
00:33:30.414 ' 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:33:30.414 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:30.415 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:30.415 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:30.415 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:30.415 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:30.415 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:30.415 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:30.415 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:30.415 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:30.415 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:30.415 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:30.415 14:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:38.553 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:38.553 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:38.553 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:38.553 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:38.553 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:38.553 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:38.553 14:23:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:38.553 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:38.553 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:38.553 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:33:38.553 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:38.553 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:33:38.553 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:38.553 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:33:38.553 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:38.553 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:38.553 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:38.553 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:38.553 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:38.553 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:38.553 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:38.553 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:38.553 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:38.553 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:38.554 14:23:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:38.554 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:38.554 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:38.554 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:38.554 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:38.554 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:38.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:38.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:33:38.555 00:33:38.555 --- 10.0.0.2 ping statistics --- 00:33:38.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.555 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:38.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:38.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:33:38.555 00:33:38.555 --- 10.0.0.1 ping statistics --- 00:33:38.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.555 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2996906 00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2996906 00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2996906 ']' 00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:38.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:38.555 14:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:38.555 [2024-12-05 14:23:43.937150] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:38.555 [2024-12-05 14:23:43.938263] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:33:38.555 [2024-12-05 14:23:43.938310] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:38.555 [2024-12-05 14:23:44.037626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:38.555 [2024-12-05 14:23:44.090208] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:38.555 [2024-12-05 14:23:44.090257] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:38.555 [2024-12-05 14:23:44.090267] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:38.555 [2024-12-05 14:23:44.090275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:38.555 [2024-12-05 14:23:44.090281] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:38.555 [2024-12-05 14:23:44.092374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:38.555 [2024-12-05 14:23:44.092537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:38.555 [2024-12-05 14:23:44.092587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:38.555 [2024-12-05 14:23:44.092589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:38.555 [2024-12-05 14:23:44.171264] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:38.555 [2024-12-05 14:23:44.172291] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:38.555 [2024-12-05 14:23:44.172504] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:38.555 [2024-12-05 14:23:44.173017] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:38.555 [2024-12-05 14:23:44.173075] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
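The notices above are the point of this interrupt_mode variant: nvmf/common.sh appended --interrupt-mode to NVMF_APP (the '[' 1 -eq 1 ']' check earlier in the trace), all four reactors in the 0xF mask came up, and every spdk_thread, including the four nvmf_tgt poll groups, reports interrupt mode rather than busy polling. The launch line, condensed from the log:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF

where -m 0xF spreads reactors over cores 0-3 and -e 0xFFFF enables every tracepoint group (hence the "spdk_trace -s nvmf -i 0" hint). The target configuration that follows is driven entirely over rpc.py against /var/tmp/spdk.sock; stripped of the full script paths, the sequence below is roughly:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512            # repeated for Malloc0 through Malloc6
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # and Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0      # and concat0
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

This is why waitforserial later expects exactly four SPDKISFASTANDAWESOME namespaces (two plain malloc bdevs, one raid0, one concat) to show up in lsblk before the fio write/randwrite passes begin.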
00:33:38.555 14:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:38.555 14:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:33:38.555 14:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:38.555 14:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:38.555 14:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:38.555 14:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:38.555 14:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:38.815 [2024-12-05 14:23:44.945942] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:38.815 14:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:39.074 14:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:33:39.074 14:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:39.333 14:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:33:39.333 14:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:39.333 14:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:33:39.333 14:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:39.593 14:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:33:39.593 14:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:33:39.852 14:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:40.112 14:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:33:40.113 14:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:40.113 14:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:33:40.374 14:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:40.374 14:23:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:33:40.374 14:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:33:40.636 14:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:40.896 14:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:40.896 14:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:40.896 14:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:40.896 14:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:41.158 14:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:41.419 [2024-12-05 14:23:47.537837] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:41.419 14:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:33:41.680 14:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:33:41.680 14:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:42.252 14:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:33:42.252 14:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:33:42.252 14:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:42.252 14:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:33:42.252 14:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:33:42.252 14:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:33:44.166 14:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:44.166 14:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:33:44.166 14:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:44.166 14:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:33:44.166 14:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:44.166 14:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:33:44.166 14:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:44.425 [global] 00:33:44.425 thread=1 00:33:44.425 invalidate=1 00:33:44.425 rw=write 00:33:44.425 time_based=1 00:33:44.425 runtime=1 00:33:44.425 ioengine=libaio 00:33:44.425 direct=1 00:33:44.425 bs=4096 00:33:44.425 iodepth=1 00:33:44.425 norandommap=0 00:33:44.425 numjobs=1 00:33:44.425 00:33:44.425 verify_dump=1 00:33:44.425 verify_backlog=512 00:33:44.425 verify_state_save=0 00:33:44.425 do_verify=1 00:33:44.425 verify=crc32c-intel 00:33:44.425 [job0] 00:33:44.425 filename=/dev/nvme0n1 00:33:44.425 [job1] 00:33:44.425 filename=/dev/nvme0n2 00:33:44.425 [job2] 00:33:44.425 filename=/dev/nvme0n3 00:33:44.425 [job3] 00:33:44.425 filename=/dev/nvme0n4 00:33:44.425 Could not set queue depth (nvme0n1) 00:33:44.425 Could not set queue depth (nvme0n2) 00:33:44.425 Could not set queue depth (nvme0n3) 00:33:44.425 Could not set queue depth (nvme0n4) 00:33:44.685 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:44.685 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:44.685 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:44.685 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:44.685 fio-3.35 00:33:44.685 Starting 4 threads 00:33:46.070 00:33:46.070 job0: (groupid=0, jobs=1): err= 0: pid=2998481: Thu Dec 5 14:23:52 2024 00:33:46.070 read: IOPS=551, BW=2206KiB/s (2259kB/s)(2208KiB/1001msec) 00:33:46.070 slat (nsec): min=5969, max=46454, avg=25010.24, stdev=7874.75 00:33:46.070 clat (usec): min=266, max=940, avg=715.79, stdev=99.39 00:33:46.070 lat (usec): min=294, max=968, avg=740.80, stdev=99.43 00:33:46.070 clat percentiles (usec): 00:33:46.070 | 1.00th=[ 469], 5.00th=[ 537], 10.00th=[ 570], 20.00th=[ 635], 00:33:46.070 | 30.00th=[ 676], 40.00th=[ 701], 50.00th=[ 725], 60.00th=[ 750], 00:33:46.070 | 70.00th=[ 775], 80.00th=[ 799], 90.00th=[ 832], 95.00th=[ 857], 00:33:46.070 | 99.00th=[ 898], 99.50th=[ 930], 99.90th=[ 938], 99.95th=[ 938], 00:33:46.070 | 99.99th=[ 938] 00:33:46.070 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:33:46.070 slat (usec): min=8, max=38132, avg=69.82, stdev=1190.85 00:33:46.070 clat (usec): min=115, max=1127, avg=496.40, stdev=132.39 00:33:46.070 lat (usec): min=124, max=38699, avg=566.22, stdev=1200.46 00:33:46.070 clat percentiles (usec): 00:33:46.070 | 1.00th=[ 155], 5.00th=[ 273], 10.00th=[ 330], 20.00th=[ 383], 00:33:46.070 | 30.00th=[ 433], 40.00th=[ 465], 50.00th=[ 494], 60.00th=[ 529], 00:33:46.070 | 70.00th=[ 570], 80.00th=[ 611], 90.00th=[ 660], 95.00th=[ 701], 00:33:46.070 | 99.00th=[ 799], 99.50th=[ 
857], 99.90th=[ 914], 99.95th=[ 1123], 00:33:46.070 | 99.99th=[ 1123] 00:33:46.070 bw ( KiB/s): min= 4096, max= 4096, per=37.75%, avg=4096.00, stdev= 0.00, samples=1 00:33:46.070 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:46.070 lat (usec) : 250=2.09%, 500=32.55%, 750=49.30%, 1000=15.99% 00:33:46.070 lat (msec) : 2=0.06% 00:33:46.070 cpu : usr=3.80%, sys=5.40%, ctx=1581, majf=0, minf=1 00:33:46.070 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:46.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.070 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.070 issued rwts: total=552,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.070 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:46.070 job1: (groupid=0, jobs=1): err= 0: pid=2998483: Thu Dec 5 14:23:52 2024 00:33:46.070 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:46.070 slat (nsec): min=7250, max=47560, avg=27535.60, stdev=2381.79 00:33:46.070 clat (usec): min=497, max=1726, avg=989.14, stdev=108.03 00:33:46.070 lat (usec): min=524, max=1759, avg=1016.67, stdev=108.25 00:33:46.070 clat percentiles (usec): 00:33:46.070 | 1.00th=[ 701], 5.00th=[ 791], 10.00th=[ 857], 20.00th=[ 914], 00:33:46.070 | 30.00th=[ 947], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1020], 00:33:46.070 | 70.00th=[ 1045], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1123], 00:33:46.070 | 99.00th=[ 1205], 99.50th=[ 1287], 99.90th=[ 1729], 99.95th=[ 1729], 00:33:46.070 | 99.99th=[ 1729] 00:33:46.070 write: IOPS=726, BW=2905KiB/s (2975kB/s)(2908KiB/1001msec); 0 zone resets 00:33:46.070 slat (nsec): min=9575, max=65183, avg=32667.73, stdev=9390.47 00:33:46.070 clat (usec): min=216, max=1196, avg=613.70, stdev=129.48 00:33:46.070 lat (usec): min=229, max=1232, avg=646.36, stdev=132.51 00:33:46.070 clat percentiles (usec): 00:33:46.070 | 1.00th=[ 285], 5.00th=[ 379], 10.00th=[ 437], 20.00th=[ 515], 00:33:46.070 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 652], 00:33:46.070 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 766], 95.00th=[ 807], 00:33:46.070 | 99.00th=[ 898], 99.50th=[ 963], 99.90th=[ 1205], 99.95th=[ 1205], 00:33:46.070 | 99.99th=[ 1205] 00:33:46.070 bw ( KiB/s): min= 4096, max= 4096, per=37.75%, avg=4096.00, stdev= 0.00, samples=1 00:33:46.070 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:46.070 lat (usec) : 250=0.08%, 500=10.49%, 750=41.00%, 1000=28.89% 00:33:46.070 lat (msec) : 2=19.53% 00:33:46.070 cpu : usr=2.10%, sys=5.50%, ctx=1240, majf=0, minf=1 00:33:46.070 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:46.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.070 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.070 issued rwts: total=512,727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.070 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:46.070 job2: (groupid=0, jobs=1): err= 0: pid=2998489: Thu Dec 5 14:23:52 2024 00:33:46.070 read: IOPS=19, BW=78.2KiB/s (80.1kB/s)(80.0KiB/1023msec) 00:33:46.070 slat (nsec): min=26045, max=27021, avg=26480.35, stdev=285.99 00:33:46.071 clat (usec): min=905, max=41940, avg=39088.69, stdev=8993.65 00:33:46.071 lat (usec): min=932, max=41966, avg=39115.17, stdev=8993.54 00:33:46.071 clat percentiles (usec): 00:33:46.071 | 1.00th=[ 906], 5.00th=[ 906], 10.00th=[40633], 20.00th=[40633], 00:33:46.071 | 30.00th=[41157], 
40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:46.071 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:33:46.071 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:33:46.071 | 99.99th=[41681] 00:33:46.071 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:33:46.071 slat (nsec): min=9055, max=71428, avg=29704.77, stdev=10550.50 00:33:46.071 clat (usec): min=195, max=801, avg=432.92, stdev=122.31 00:33:46.071 lat (usec): min=223, max=834, avg=462.62, stdev=125.69 00:33:46.071 clat percentiles (usec): 00:33:46.071 | 1.00th=[ 206], 5.00th=[ 231], 10.00th=[ 289], 20.00th=[ 326], 00:33:46.071 | 30.00th=[ 347], 40.00th=[ 379], 50.00th=[ 429], 60.00th=[ 469], 00:33:46.071 | 70.00th=[ 506], 80.00th=[ 529], 90.00th=[ 603], 95.00th=[ 652], 00:33:46.071 | 99.00th=[ 734], 99.50th=[ 750], 99.90th=[ 799], 99.95th=[ 799], 00:33:46.071 | 99.99th=[ 799] 00:33:46.071 bw ( KiB/s): min= 4096, max= 4096, per=37.75%, avg=4096.00, stdev= 0.00, samples=1 00:33:46.071 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:46.071 lat (usec) : 250=6.02%, 500=59.96%, 750=29.70%, 1000=0.75% 00:33:46.071 lat (msec) : 50=3.57% 00:33:46.071 cpu : usr=1.17%, sys=1.76%, ctx=532, majf=0, minf=2 00:33:46.071 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:46.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.071 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.071 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.071 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:46.071 job3: (groupid=0, jobs=1): err= 0: pid=2998490: Thu Dec 5 14:23:52 2024 00:33:46.071 read: IOPS=17, BW=70.4KiB/s (72.1kB/s)(72.0KiB/1023msec) 00:33:46.071 slat (nsec): min=27003, max=29234, avg=28234.72, stdev=475.25 00:33:46.071 clat (usec): min=987, max=42763, avg=39669.13, stdev=9659.25 00:33:46.071 lat (usec): min=1015, max=42792, avg=39697.37, stdev=9659.11 00:33:46.071 clat percentiles (usec): 00:33:46.071 | 1.00th=[ 988], 5.00th=[ 988], 10.00th=[41157], 20.00th=[41681], 00:33:46.071 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:33:46.071 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:33:46.071 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:46.071 | 99.99th=[42730] 00:33:46.071 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:33:46.071 slat (nsec): min=9101, max=59593, avg=32038.69, stdev=11161.60 00:33:46.071 clat (usec): min=201, max=1125, avg=562.28, stdev=143.37 00:33:46.071 lat (usec): min=212, max=1162, avg=594.32, stdev=147.76 00:33:46.071 clat percentiles (usec): 00:33:46.071 | 1.00th=[ 258], 5.00th=[ 334], 10.00th=[ 363], 20.00th=[ 429], 00:33:46.071 | 30.00th=[ 474], 40.00th=[ 529], 50.00th=[ 570], 60.00th=[ 611], 00:33:46.071 | 70.00th=[ 652], 80.00th=[ 693], 90.00th=[ 742], 95.00th=[ 783], 00:33:46.071 | 99.00th=[ 840], 99.50th=[ 889], 99.90th=[ 1123], 99.95th=[ 1123], 00:33:46.071 | 99.99th=[ 1123] 00:33:46.071 bw ( KiB/s): min= 4096, max= 4096, per=37.75%, avg=4096.00, stdev= 0.00, samples=1 00:33:46.071 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:46.071 lat (usec) : 250=0.75%, 500=32.83%, 750=55.09%, 1000=7.92% 00:33:46.071 lat (msec) : 2=0.19%, 50=3.21% 00:33:46.071 cpu : usr=0.78%, sys=2.35%, ctx=531, majf=0, minf=1 00:33:46.071 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:33:46.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.071 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.071 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.071 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:46.071 00:33:46.071 Run status group 0 (all jobs): 00:33:46.071 READ: bw=4309KiB/s (4412kB/s), 70.4KiB/s-2206KiB/s (72.1kB/s-2259kB/s), io=4408KiB (4514kB), run=1001-1023msec 00:33:46.071 WRITE: bw=10.6MiB/s (11.1MB/s), 2002KiB/s-4092KiB/s (2050kB/s-4190kB/s), io=10.8MiB (11.4MB), run=1001-1023msec 00:33:46.071 00:33:46.071 Disk stats (read/write): 00:33:46.071 nvme0n1: ios=563/768, merge=0/0, ticks=434/315, in_queue=749, util=86.97% 00:33:46.071 nvme0n2: ios=533/512, merge=0/0, ticks=1077/245, in_queue=1322, util=87.74% 00:33:46.071 nvme0n3: ios=72/512, merge=0/0, ticks=679/168, in_queue=847, util=94.92% 00:33:46.071 nvme0n4: ios=77/512, merge=0/0, ticks=949/229, in_queue=1178, util=96.90% 00:33:46.071 14:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:33:46.071 [global] 00:33:46.071 thread=1 00:33:46.071 invalidate=1 00:33:46.071 rw=randwrite 00:33:46.071 time_based=1 00:33:46.071 runtime=1 00:33:46.071 ioengine=libaio 00:33:46.071 direct=1 00:33:46.071 bs=4096 00:33:46.071 iodepth=1 00:33:46.071 norandommap=0 00:33:46.071 numjobs=1 00:33:46.071 00:33:46.071 verify_dump=1 00:33:46.071 verify_backlog=512 00:33:46.071 verify_state_save=0 00:33:46.071 do_verify=1 00:33:46.071 verify=crc32c-intel 00:33:46.071 [job0] 00:33:46.071 filename=/dev/nvme0n1 00:33:46.071 [job1] 00:33:46.071 filename=/dev/nvme0n2 00:33:46.071 [job2] 00:33:46.071 filename=/dev/nvme0n3 00:33:46.071 [job3] 00:33:46.071 filename=/dev/nvme0n4 00:33:46.071 Could not set queue depth (nvme0n1) 00:33:46.071 Could not set queue depth (nvme0n2) 00:33:46.071 Could not set queue depth (nvme0n3) 00:33:46.071 Could not set queue depth (nvme0n4) 00:33:46.332 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:46.332 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:46.332 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:46.332 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:46.332 fio-3.35 00:33:46.332 Starting 4 threads 00:33:47.716 00:33:47.716 job0: (groupid=0, jobs=1): err= 0: pid=2999008: Thu Dec 5 14:23:53 2024 00:33:47.716 read: IOPS=16, BW=67.9KiB/s (69.6kB/s)(68.0KiB/1001msec) 00:33:47.716 slat (nsec): min=8951, max=25841, avg=24522.35, stdev=4020.17 00:33:47.716 clat (usec): min=1112, max=42085, avg=39226.05, stdev=9831.25 00:33:47.716 lat (usec): min=1137, max=42110, avg=39250.57, stdev=9831.14 00:33:47.716 clat percentiles (usec): 00:33:47.716 | 1.00th=[ 1106], 5.00th=[ 1106], 10.00th=[40633], 20.00th=[41157], 00:33:47.716 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:33:47.716 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:33:47.716 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:47.716 | 99.99th=[42206] 00:33:47.716 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 
00:33:47.716 slat (nsec): min=9407, max=54411, avg=29677.15, stdev=7217.63 00:33:47.716 clat (usec): min=131, max=1155, avg=613.55, stdev=165.03 00:33:47.716 lat (usec): min=153, max=1186, avg=643.23, stdev=167.01 00:33:47.716 clat percentiles (usec): 00:33:47.716 | 1.00th=[ 186], 5.00th=[ 347], 10.00th=[ 396], 20.00th=[ 478], 00:33:47.716 | 30.00th=[ 537], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 660], 00:33:47.716 | 70.00th=[ 693], 80.00th=[ 742], 90.00th=[ 824], 95.00th=[ 881], 00:33:47.716 | 99.00th=[ 1020], 99.50th=[ 1057], 99.90th=[ 1156], 99.95th=[ 1156], 00:33:47.716 | 99.99th=[ 1156] 00:33:47.716 bw ( KiB/s): min= 4096, max= 4096, per=45.08%, avg=4096.00, stdev= 0.00, samples=1 00:33:47.716 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:47.716 lat (usec) : 250=1.51%, 500=21.36%, 750=56.14%, 1000=16.26% 00:33:47.716 lat (msec) : 2=1.70%, 50=3.02% 00:33:47.716 cpu : usr=0.90%, sys=1.50%, ctx=529, majf=0, minf=1 00:33:47.716 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:47.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.716 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.716 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:47.716 job1: (groupid=0, jobs=1): err= 0: pid=2999009: Thu Dec 5 14:23:53 2024 00:33:47.716 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:47.716 slat (nsec): min=8089, max=61284, avg=26678.91, stdev=2758.97 00:33:47.716 clat (usec): min=658, max=5532, avg=1014.56, stdev=216.80 00:33:47.716 lat (usec): min=689, max=5564, avg=1041.23, stdev=216.85 00:33:47.716 clat percentiles (usec): 00:33:47.716 | 1.00th=[ 783], 5.00th=[ 848], 10.00th=[ 906], 20.00th=[ 947], 00:33:47.716 | 30.00th=[ 979], 40.00th=[ 996], 50.00th=[ 1012], 60.00th=[ 1029], 00:33:47.716 | 70.00th=[ 1045], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1139], 00:33:47.716 | 99.00th=[ 1188], 99.50th=[ 1237], 99.90th=[ 5538], 99.95th=[ 5538], 00:33:47.716 | 99.99th=[ 5538] 00:33:47.716 write: IOPS=695, BW=2781KiB/s (2848kB/s)(2784KiB/1001msec); 0 zone resets 00:33:47.716 slat (nsec): min=8890, max=67327, avg=28979.44, stdev=9596.86 00:33:47.716 clat (usec): min=242, max=4270, avg=628.54, stdev=178.68 00:33:47.716 lat (usec): min=256, max=4307, avg=657.52, stdev=181.69 00:33:47.716 clat percentiles (usec): 00:33:47.716 | 1.00th=[ 363], 5.00th=[ 429], 10.00th=[ 465], 20.00th=[ 523], 00:33:47.716 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 668], 00:33:47.716 | 70.00th=[ 701], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 791], 00:33:47.716 | 99.00th=[ 848], 99.50th=[ 873], 99.90th=[ 4293], 99.95th=[ 4293], 00:33:47.716 | 99.99th=[ 4293] 00:33:47.717 bw ( KiB/s): min= 4096, max= 4096, per=45.08%, avg=4096.00, stdev= 0.00, samples=1 00:33:47.717 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:47.717 lat (usec) : 250=0.08%, 500=9.85%, 750=40.23%, 1000=24.83% 00:33:47.717 lat (msec) : 2=24.83%, 10=0.17% 00:33:47.717 cpu : usr=3.40%, sys=3.70%, ctx=1208, majf=0, minf=1 00:33:47.717 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:47.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.717 issued rwts: total=512,696,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.717 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:33:47.717 job2: (groupid=0, jobs=1): err= 0: pid=2999010: Thu Dec 5 14:23:53 2024 00:33:47.717 read: IOPS=462, BW=1848KiB/s (1893kB/s)(1852KiB/1002msec) 00:33:47.717 slat (nsec): min=6918, max=63736, avg=25528.19, stdev=4181.32 00:33:47.717 clat (usec): min=346, max=1575, avg=1254.78, stdev=185.01 00:33:47.717 lat (usec): min=372, max=1601, avg=1280.31, stdev=185.71 00:33:47.717 clat percentiles (usec): 00:33:47.717 | 1.00th=[ 660], 5.00th=[ 938], 10.00th=[ 1029], 20.00th=[ 1123], 00:33:47.717 | 30.00th=[ 1156], 40.00th=[ 1221], 50.00th=[ 1303], 60.00th=[ 1352], 00:33:47.717 | 70.00th=[ 1385], 80.00th=[ 1418], 90.00th=[ 1450], 95.00th=[ 1483], 00:33:47.717 | 99.00th=[ 1532], 99.50th=[ 1549], 99.90th=[ 1582], 99.95th=[ 1582], 00:33:47.717 | 99.99th=[ 1582] 00:33:47.717 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:33:47.717 slat (nsec): min=9474, max=68084, avg=31218.92, stdev=6403.16 00:33:47.717 clat (usec): min=202, max=1066, avg=749.99, stdev=164.19 00:33:47.717 lat (usec): min=212, max=1097, avg=781.21, stdev=165.88 00:33:47.717 clat percentiles (usec): 00:33:47.717 | 1.00th=[ 310], 5.00th=[ 433], 10.00th=[ 519], 20.00th=[ 619], 00:33:47.717 | 30.00th=[ 693], 40.00th=[ 734], 50.00th=[ 766], 60.00th=[ 824], 00:33:47.717 | 70.00th=[ 857], 80.00th=[ 889], 90.00th=[ 938], 95.00th=[ 971], 00:33:47.717 | 99.00th=[ 1029], 99.50th=[ 1045], 99.90th=[ 1074], 99.95th=[ 1074], 00:33:47.717 | 99.99th=[ 1074] 00:33:47.717 bw ( KiB/s): min= 4096, max= 4096, per=45.08%, avg=4096.00, stdev= 0.00, samples=1 00:33:47.717 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:47.717 lat (usec) : 250=0.10%, 500=5.33%, 750=18.87%, 1000=31.28% 00:33:47.717 lat (msec) : 2=44.41% 00:33:47.717 cpu : usr=1.20%, sys=3.10%, ctx=975, majf=0, minf=1 00:33:47.717 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:47.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.717 issued rwts: total=463,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.717 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:47.717 job3: (groupid=0, jobs=1): err= 0: pid=2999011: Thu Dec 5 14:23:53 2024 00:33:47.717 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:47.717 slat (nsec): min=7668, max=45198, avg=25630.68, stdev=2938.28 00:33:47.717 clat (usec): min=410, max=1561, avg=1141.40, stdev=141.32 00:33:47.717 lat (usec): min=436, max=1586, avg=1167.03, stdev=141.46 00:33:47.717 clat percentiles (usec): 00:33:47.717 | 1.00th=[ 701], 5.00th=[ 914], 10.00th=[ 1004], 20.00th=[ 1074], 00:33:47.717 | 30.00th=[ 1090], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1156], 00:33:47.717 | 70.00th=[ 1188], 80.00th=[ 1221], 90.00th=[ 1319], 95.00th=[ 1401], 00:33:47.717 | 99.00th=[ 1483], 99.50th=[ 1549], 99.90th=[ 1565], 99.95th=[ 1565], 00:33:47.717 | 99.99th=[ 1565] 00:33:47.717 write: IOPS=555, BW=2222KiB/s (2275kB/s)(2224KiB/1001msec); 0 zone resets 00:33:47.717 slat (nsec): min=9582, max=64504, avg=30235.71, stdev=7236.31 00:33:47.717 clat (usec): min=215, max=1142, avg=677.67, stdev=147.25 00:33:47.717 lat (usec): min=247, max=1194, avg=707.90, stdev=148.96 00:33:47.717 clat percentiles (usec): 00:33:47.717 | 1.00th=[ 359], 5.00th=[ 420], 10.00th=[ 498], 20.00th=[ 545], 00:33:47.717 | 30.00th=[ 611], 40.00th=[ 644], 50.00th=[ 685], 60.00th=[ 717], 00:33:47.717 | 70.00th=[ 750], 80.00th=[ 791], 
90.00th=[ 865], 95.00th=[ 922], 00:33:47.717 | 99.00th=[ 1037], 99.50th=[ 1074], 99.90th=[ 1139], 99.95th=[ 1139], 00:33:47.717 | 99.99th=[ 1139] 00:33:47.717 bw ( KiB/s): min= 4096, max= 4096, per=45.08%, avg=4096.00, stdev= 0.00, samples=1 00:33:47.717 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:47.717 lat (usec) : 250=0.09%, 500=5.43%, 750=31.74%, 1000=18.45% 00:33:47.717 lat (msec) : 2=44.29% 00:33:47.717 cpu : usr=1.40%, sys=3.40%, ctx=1068, majf=0, minf=1 00:33:47.717 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:47.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:47.717 issued rwts: total=512,556,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:47.717 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:47.717 00:33:47.717 Run status group 0 (all jobs): 00:33:47.717 READ: bw=6004KiB/s (6148kB/s), 67.9KiB/s-2046KiB/s (69.6kB/s-2095kB/s), io=6016KiB (6160kB), run=1001-1002msec 00:33:47.717 WRITE: bw=9086KiB/s (9304kB/s), 2044KiB/s-2781KiB/s (2093kB/s-2848kB/s), io=9104KiB (9322kB), run=1001-1002msec 00:33:47.717 00:33:47.717 Disk stats (read/write): 00:33:47.717 nvme0n1: ios=63/512, merge=0/0, ticks=564/298, in_queue=862, util=87.58% 00:33:47.717 nvme0n2: ios=490/512, merge=0/0, ticks=476/254, in_queue=730, util=86.25% 00:33:47.717 nvme0n3: ios=320/512, merge=0/0, ticks=413/357, in_queue=770, util=88.41% 00:33:47.717 nvme0n4: ios=435/512, merge=0/0, ticks=631/329, in_queue=960, util=92.00% 00:33:47.717 14:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:33:47.717 [global] 00:33:47.717 thread=1 00:33:47.717 invalidate=1 00:33:47.717 rw=write 00:33:47.717 time_based=1 00:33:47.717 runtime=1 00:33:47.717 ioengine=libaio 00:33:47.717 direct=1 00:33:47.717 bs=4096 00:33:47.717 iodepth=128 00:33:47.717 norandommap=0 00:33:47.717 numjobs=1 00:33:47.717 00:33:47.717 verify_dump=1 00:33:47.717 verify_backlog=512 00:33:47.717 verify_state_save=0 00:33:47.717 do_verify=1 00:33:47.717 verify=crc32c-intel 00:33:47.717 [job0] 00:33:47.717 filename=/dev/nvme0n1 00:33:47.717 [job1] 00:33:47.717 filename=/dev/nvme0n2 00:33:47.717 [job2] 00:33:47.717 filename=/dev/nvme0n3 00:33:47.717 [job3] 00:33:47.717 filename=/dev/nvme0n4 00:33:47.717 Could not set queue depth (nvme0n1) 00:33:47.717 Could not set queue depth (nvme0n2) 00:33:47.717 Could not set queue depth (nvme0n3) 00:33:47.717 Could not set queue depth (nvme0n4) 00:33:47.978 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:47.978 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:47.978 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:47.978 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:47.978 fio-3.35 00:33:47.978 Starting 4 threads 00:33:49.360 00:33:49.360 job0: (groupid=0, jobs=1): err= 0: pid=2999532: Thu Dec 5 14:23:55 2024 00:33:49.360 read: IOPS=5663, BW=22.1MiB/s (23.2MB/s)(22.4MiB/1011msec) 00:33:49.360 slat (nsec): min=999, max=10331k, avg=85104.82, stdev=620636.79 00:33:49.360 clat (usec): min=3726, max=55987, avg=10155.14, stdev=5730.94 00:33:49.360 lat 
(usec): min=3733, max=55990, avg=10240.24, stdev=5793.29 00:33:49.360 clat percentiles (usec): 00:33:49.360 | 1.00th=[ 4359], 5.00th=[ 6063], 10.00th=[ 6390], 20.00th=[ 7242], 00:33:49.360 | 30.00th=[ 7767], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9503], 00:33:49.360 | 70.00th=[10683], 80.00th=[11600], 90.00th=[13829], 95.00th=[16712], 00:33:49.360 | 99.00th=[40633], 99.50th=[49021], 99.90th=[55313], 99.95th=[55837], 00:33:49.360 | 99.99th=[55837] 00:33:49.360 write: IOPS=6077, BW=23.7MiB/s (24.9MB/s)(24.0MiB/1011msec); 0 zone resets 00:33:49.360 slat (nsec): min=1690, max=9021.3k, avg=78982.76, stdev=478403.81 00:33:49.360 clat (usec): min=1186, max=57696, avg=11397.83, stdev=9717.61 00:33:49.360 lat (usec): min=1197, max=57699, avg=11476.81, stdev=9772.97 00:33:49.360 clat percentiles (usec): 00:33:49.360 | 1.00th=[ 3556], 5.00th=[ 4490], 10.00th=[ 5276], 20.00th=[ 6194], 00:33:49.360 | 30.00th=[ 6587], 40.00th=[ 7177], 50.00th=[ 7701], 60.00th=[ 8455], 00:33:49.360 | 70.00th=[ 9110], 80.00th=[11207], 90.00th=[29492], 95.00th=[34866], 00:33:49.360 | 99.00th=[44827], 99.50th=[55837], 99.90th=[57410], 99.95th=[57934], 00:33:49.360 | 99.99th=[57934] 00:33:49.360 bw ( KiB/s): min=21488, max=27392, per=23.95%, avg=24440.00, stdev=4174.76, samples=2 00:33:49.360 iops : min= 5372, max= 6848, avg=6110.00, stdev=1043.69, samples=2 00:33:49.360 lat (msec) : 2=0.02%, 4=0.76%, 10=69.09%, 20=20.43%, 50=9.24% 00:33:49.360 lat (msec) : 100=0.46% 00:33:49.360 cpu : usr=3.86%, sys=6.04%, ctx=523, majf=0, minf=1 00:33:49.360 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:33:49.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:49.360 issued rwts: total=5726,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:49.360 job1: (groupid=0, jobs=1): err= 0: pid=2999533: Thu Dec 5 14:23:55 2024 00:33:49.360 read: IOPS=6250, BW=24.4MiB/s (25.6MB/s)(24.5MiB/1003msec) 00:33:49.360 slat (nsec): min=1029, max=15767k, avg=67331.20, stdev=578770.06 00:33:49.360 clat (usec): min=899, max=32769, avg=9235.05, stdev=3701.73 00:33:49.360 lat (usec): min=3991, max=32799, avg=9302.38, stdev=3741.84 00:33:49.360 clat percentiles (usec): 00:33:49.360 | 1.00th=[ 4621], 5.00th=[ 5735], 10.00th=[ 6128], 20.00th=[ 6521], 00:33:49.360 | 30.00th=[ 6849], 40.00th=[ 7177], 50.00th=[ 8094], 60.00th=[ 8848], 00:33:49.360 | 70.00th=[10028], 80.00th=[11600], 90.00th=[14353], 95.00th=[17433], 00:33:49.360 | 99.00th=[21103], 99.50th=[21627], 99.90th=[32113], 99.95th=[32113], 00:33:49.360 | 99.99th=[32900] 00:33:49.360 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:33:49.360 slat (nsec): min=1690, max=42216k, avg=81838.85, stdev=851663.91 00:33:49.360 clat (usec): min=1445, max=82870, avg=9275.02, stdev=8620.44 00:33:49.360 lat (usec): min=1455, max=82919, avg=9356.86, stdev=8703.20 00:33:49.360 clat percentiles (usec): 00:33:49.360 | 1.00th=[ 3228], 5.00th=[ 4178], 10.00th=[ 4621], 20.00th=[ 6063], 00:33:49.360 | 30.00th=[ 6456], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7177], 00:33:49.360 | 70.00th=[ 7963], 80.00th=[ 9372], 90.00th=[12911], 95.00th=[23987], 00:33:49.360 | 99.00th=[53740], 99.50th=[59507], 99.90th=[66323], 99.95th=[66323], 00:33:49.360 | 99.99th=[83362] 00:33:49.360 bw ( KiB/s): min=23168, max=30056, per=26.07%, avg=26612.00, stdev=4870.55, samples=2 00:33:49.360 iops : min= 5792, 
max= 7514, avg=6653.00, stdev=1217.64, samples=2 00:33:49.360 lat (usec) : 1000=0.01% 00:33:49.360 lat (msec) : 2=0.15%, 4=2.04%, 10=75.04%, 20=18.15%, 50=3.85% 00:33:49.360 lat (msec) : 100=0.76% 00:33:49.360 cpu : usr=4.89%, sys=6.79%, ctx=504, majf=0, minf=1 00:33:49.360 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:33:49.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:49.360 issued rwts: total=6269,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:49.360 job2: (groupid=0, jobs=1): err= 0: pid=2999534: Thu Dec 5 14:23:55 2024 00:33:49.360 read: IOPS=7140, BW=27.9MiB/s (29.2MB/s)(28.1MiB/1007msec) 00:33:49.360 slat (nsec): min=986, max=8015.3k, avg=66008.43, stdev=517874.32 00:33:49.360 clat (usec): min=1802, max=26646, avg=9117.35, stdev=2669.17 00:33:49.360 lat (usec): min=1842, max=26649, avg=9183.36, stdev=2702.88 00:33:49.360 clat percentiles (usec): 00:33:49.360 | 1.00th=[ 4883], 5.00th=[ 6128], 10.00th=[ 6718], 20.00th=[ 7439], 00:33:49.360 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8586], 60.00th=[ 8848], 00:33:49.360 | 70.00th=[ 9241], 80.00th=[10552], 90.00th=[12518], 95.00th=[14222], 00:33:49.360 | 99.00th=[19792], 99.50th=[20055], 99.90th=[23200], 99.95th=[23200], 00:33:49.360 | 99.99th=[26608] 00:33:49.360 write: IOPS=7626, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1007msec); 0 zone resets 00:33:49.360 slat (nsec): min=1635, max=11518k, avg=59245.89, stdev=479739.51 00:33:49.360 clat (usec): min=877, max=26485, avg=8069.94, stdev=2835.33 00:33:49.360 lat (usec): min=886, max=26608, avg=8129.19, stdev=2861.29 00:33:49.360 clat percentiles (usec): 00:33:49.360 | 1.00th=[ 3195], 5.00th=[ 4490], 10.00th=[ 5145], 20.00th=[ 5997], 00:33:49.360 | 30.00th=[ 6783], 40.00th=[ 7373], 50.00th=[ 7832], 60.00th=[ 8160], 00:33:49.360 | 70.00th=[ 8717], 80.00th=[ 9241], 90.00th=[11207], 95.00th=[13042], 00:33:49.360 | 99.00th=[18482], 99.50th=[26346], 99.90th=[26346], 99.95th=[26346], 00:33:49.360 | 99.99th=[26608] 00:33:49.360 bw ( KiB/s): min=27840, max=32760, per=29.69%, avg=30300.00, stdev=3478.97, samples=2 00:33:49.360 iops : min= 6960, max= 8190, avg=7575.00, stdev=869.74, samples=2 00:33:49.360 lat (usec) : 1000=0.02% 00:33:49.360 lat (msec) : 2=0.13%, 4=1.31%, 10=79.93%, 20=17.77%, 50=0.83% 00:33:49.360 cpu : usr=5.57%, sys=8.35%, ctx=372, majf=0, minf=1 00:33:49.360 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:33:49.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:49.360 issued rwts: total=7190,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:49.360 job3: (groupid=0, jobs=1): err= 0: pid=2999535: Thu Dec 5 14:23:55 2024 00:33:49.360 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:33:49.360 slat (nsec): min=1032, max=47902k, avg=96258.43, stdev=972374.17 00:33:49.360 clat (usec): min=2191, max=75588, avg=13302.47, stdev=9765.76 00:33:49.360 lat (usec): min=2201, max=75634, avg=13398.73, stdev=9842.75 00:33:49.360 clat percentiles (usec): 00:33:49.360 | 1.00th=[ 2933], 5.00th=[ 5276], 10.00th=[ 7504], 20.00th=[ 8717], 00:33:49.360 | 30.00th=[ 9241], 40.00th=[10159], 50.00th=[10552], 60.00th=[11076], 00:33:49.360 | 70.00th=[12125], 
80.00th=[14091], 90.00th=[22938], 95.00th=[31065], 00:33:49.360 | 99.00th=[64750], 99.50th=[64750], 99.90th=[69731], 99.95th=[69731], 00:33:49.360 | 99.99th=[76022] 00:33:49.360 write: IOPS=5306, BW=20.7MiB/s (21.7MB/s)(20.8MiB/1002msec); 0 zone resets 00:33:49.360 slat (nsec): min=1664, max=13469k, avg=79995.82, stdev=627647.61 00:33:49.360 clat (usec): min=709, max=62517, avg=11006.79, stdev=5708.40 00:33:49.360 lat (usec): min=3190, max=62524, avg=11086.78, stdev=5737.79 00:33:49.360 clat percentiles (usec): 00:33:49.360 | 1.00th=[ 4146], 5.00th=[ 5014], 10.00th=[ 5735], 20.00th=[ 7373], 00:33:49.360 | 30.00th=[ 8586], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10290], 00:33:49.360 | 70.00th=[11207], 80.00th=[13042], 90.00th=[17957], 95.00th=[20579], 00:33:49.360 | 99.00th=[26870], 99.50th=[39060], 99.90th=[62653], 99.95th=[62653], 00:33:49.360 | 99.99th=[62653] 00:33:49.360 bw ( KiB/s): min=17664, max=23856, per=20.34%, avg=20760.00, stdev=4378.41, samples=2 00:33:49.360 iops : min= 4416, max= 5964, avg=5190.00, stdev=1094.60, samples=2 00:33:49.360 lat (usec) : 750=0.01% 00:33:49.360 lat (msec) : 4=1.00%, 10=46.31%, 20=42.86%, 50=8.61%, 100=1.22% 00:33:49.360 cpu : usr=4.10%, sys=6.39%, ctx=313, majf=0, minf=1 00:33:49.360 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:33:49.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:49.360 issued rwts: total=5120,5317,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:49.360 00:33:49.360 Run status group 0 (all jobs): 00:33:49.360 READ: bw=93.9MiB/s (98.5MB/s), 20.0MiB/s-27.9MiB/s (20.9MB/s-29.2MB/s), io=94.9MiB (99.6MB), run=1002-1011msec 00:33:49.360 WRITE: bw=99.7MiB/s (105MB/s), 20.7MiB/s-29.8MiB/s (21.7MB/s-31.2MB/s), io=101MiB (106MB), run=1002-1011msec 00:33:49.360 00:33:49.360 Disk stats (read/write): 00:33:49.360 nvme0n1: ios=4630/4969, merge=0/0, ticks=45503/57101, in_queue=102604, util=84.47% 00:33:49.360 nvme0n2: ios=5678/6065, merge=0/0, ticks=47178/44574, in_queue=91752, util=90.62% 00:33:49.360 nvme0n3: ios=6204/6404, merge=0/0, ticks=49833/41791, in_queue=91624, util=92.83% 00:33:49.360 nvme0n4: ios=4143/4102, merge=0/0, ticks=35474/25562, in_queue=61036, util=94.46% 00:33:49.360 14:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:33:49.360 [global] 00:33:49.360 thread=1 00:33:49.360 invalidate=1 00:33:49.360 rw=randwrite 00:33:49.361 time_based=1 00:33:49.361 runtime=1 00:33:49.361 ioengine=libaio 00:33:49.361 direct=1 00:33:49.361 bs=4096 00:33:49.361 iodepth=128 00:33:49.361 norandommap=0 00:33:49.361 numjobs=1 00:33:49.361 00:33:49.361 verify_dump=1 00:33:49.361 verify_backlog=512 00:33:49.361 verify_state_save=0 00:33:49.361 do_verify=1 00:33:49.361 verify=crc32c-intel 00:33:49.361 [job0] 00:33:49.361 filename=/dev/nvme0n1 00:33:49.361 [job1] 00:33:49.361 filename=/dev/nvme0n2 00:33:49.361 [job2] 00:33:49.361 filename=/dev/nvme0n3 00:33:49.361 [job3] 00:33:49.361 filename=/dev/nvme0n4 00:33:49.361 Could not set queue depth (nvme0n1) 00:33:49.361 Could not set queue depth (nvme0n2) 00:33:49.361 Could not set queue depth (nvme0n3) 00:33:49.361 Could not set queue depth (nvme0n4) 00:33:49.620 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:33:49.620 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:49.620 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:49.620 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:49.620 fio-3.35 00:33:49.620 Starting 4 threads 00:33:51.004 00:33:51.004 job0: (groupid=0, jobs=1): err= 0: pid=2999980: Thu Dec 5 14:23:57 2024 00:33:51.004 read: IOPS=4923, BW=19.2MiB/s (20.2MB/s)(19.3MiB/1003msec) 00:33:51.004 slat (nsec): min=929, max=43906k, avg=112111.23, stdev=972842.96 00:33:51.004 clat (usec): min=1221, max=56835, avg=14101.89, stdev=10941.54 00:33:51.004 lat (usec): min=3194, max=56861, avg=14214.00, stdev=11024.04 00:33:51.004 clat percentiles (usec): 00:33:51.004 | 1.00th=[ 4490], 5.00th=[ 6456], 10.00th=[ 7177], 20.00th=[ 8160], 00:33:51.004 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765], 00:33:51.004 | 70.00th=[10552], 80.00th=[16319], 90.00th=[34341], 95.00th=[40633], 00:33:51.004 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50594], 99.95th=[50594], 00:33:51.004 | 99.99th=[56886] 00:33:51.004 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:33:51.004 slat (nsec): min=1576, max=13454k, avg=82229.45, stdev=569144.60 00:33:51.004 clat (usec): min=2948, max=53046, avg=11209.14, stdev=8359.92 00:33:51.004 lat (usec): min=2955, max=53061, avg=11291.37, stdev=8402.98 00:33:51.004 clat percentiles (usec): 00:33:51.004 | 1.00th=[ 3556], 5.00th=[ 5211], 10.00th=[ 5800], 20.00th=[ 6783], 00:33:51.004 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8356], 60.00th=[ 8717], 00:33:51.004 | 70.00th=[ 9765], 80.00th=[12649], 90.00th=[20055], 95.00th=[31065], 00:33:51.004 | 99.00th=[50070], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:33:51.004 | 99.99th=[53216] 00:33:51.004 bw ( KiB/s): min=18320, max=22640, per=20.91%, avg=20480.00, stdev=3054.70, samples=2 00:33:51.004 iops : min= 4580, max= 5660, avg=5120.00, stdev=763.68, samples=2 00:33:51.004 lat (msec) : 2=0.01%, 4=0.85%, 10=66.76%, 20=18.34%, 50=13.12% 00:33:51.004 lat (msec) : 100=0.91% 00:33:51.004 cpu : usr=2.59%, sys=3.99%, ctx=488, majf=0, minf=1 00:33:51.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:33:51.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:51.004 issued rwts: total=4938,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:51.004 job1: (groupid=0, jobs=1): err= 0: pid=3000000: Thu Dec 5 14:23:57 2024 00:33:51.004 read: IOPS=5833, BW=22.8MiB/s (23.9MB/s)(22.9MiB/1004msec) 00:33:51.004 slat (nsec): min=883, max=9372.9k, avg=81965.28, stdev=457837.36 00:33:51.004 clat (usec): min=1045, max=25262, avg=10294.80, stdev=2640.86 00:33:51.004 lat (usec): min=5184, max=25268, avg=10376.77, stdev=2647.90 00:33:51.004 clat percentiles (usec): 00:33:51.004 | 1.00th=[ 5604], 5.00th=[ 7177], 10.00th=[ 7635], 20.00th=[ 8455], 00:33:51.004 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10159], 00:33:51.004 | 70.00th=[10683], 80.00th=[11863], 90.00th=[13042], 95.00th=[15926], 00:33:51.004 | 99.00th=[19530], 99.50th=[20841], 99.90th=[21365], 99.95th=[21365], 00:33:51.004 | 99.99th=[25297] 00:33:51.004 write: IOPS=6119, 
BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:33:51.004 slat (nsec): min=1485, max=7655.7k, avg=82617.59, stdev=437233.33 00:33:51.004 clat (usec): min=1132, max=48488, avg=10907.91, stdev=5737.01 00:33:51.004 lat (usec): min=1140, max=48496, avg=10990.52, stdev=5764.26 00:33:51.004 clat percentiles (usec): 00:33:51.004 | 1.00th=[ 5932], 5.00th=[ 6980], 10.00th=[ 7635], 20.00th=[ 8455], 00:33:51.004 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9634], 00:33:51.004 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[16057], 95.00th=[22938], 00:33:51.004 | 99.00th=[39060], 99.50th=[42206], 99.90th=[48497], 99.95th=[48497], 00:33:51.004 | 99.99th=[48497] 00:33:51.004 bw ( KiB/s): min=22904, max=26248, per=25.09%, avg=24576.00, stdev=2364.57, samples=2 00:33:51.004 iops : min= 5726, max= 6562, avg=6144.00, stdev=591.14, samples=2 00:33:51.004 lat (msec) : 2=0.26%, 4=0.07%, 10=64.49%, 20=31.04%, 50=4.15% 00:33:51.004 cpu : usr=1.69%, sys=2.99%, ctx=669, majf=0, minf=1 00:33:51.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:33:51.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:51.004 issued rwts: total=5857,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:51.004 job2: (groupid=0, jobs=1): err= 0: pid=3000027: Thu Dec 5 14:23:57 2024 00:33:51.004 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:33:51.004 slat (nsec): min=955, max=8231.5k, avg=78900.74, stdev=552978.47 00:33:51.004 clat (usec): min=2050, max=18514, avg=10163.33, stdev=2319.96 00:33:51.004 lat (usec): min=2092, max=20241, avg=10242.23, stdev=2354.78 00:33:51.004 clat percentiles (usec): 00:33:51.004 | 1.00th=[ 5866], 5.00th=[ 6915], 10.00th=[ 7570], 20.00th=[ 8225], 00:33:51.004 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10290], 00:33:51.004 | 70.00th=[11076], 80.00th=[11994], 90.00th=[13173], 95.00th=[14877], 00:33:51.004 | 99.00th=[16450], 99.50th=[17433], 99.90th=[18482], 99.95th=[18482], 00:33:51.004 | 99.99th=[18482] 00:33:51.004 write: IOPS=6619, BW=25.9MiB/s (27.1MB/s)(25.9MiB/1003msec); 0 zone resets 00:33:51.004 slat (nsec): min=1592, max=11728k, avg=71611.29, stdev=491998.91 00:33:51.004 clat (usec): min=1104, max=57730, avg=9331.97, stdev=4960.89 00:33:51.004 lat (usec): min=1107, max=57734, avg=9403.58, stdev=4999.01 00:33:51.004 clat percentiles (usec): 00:33:51.005 | 1.00th=[ 4424], 5.00th=[ 5080], 10.00th=[ 6587], 20.00th=[ 7439], 00:33:51.005 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8979], 00:33:51.005 | 70.00th=[ 9372], 80.00th=[10421], 90.00th=[11731], 95.00th=[12911], 00:33:51.005 | 99.00th=[39060], 99.50th=[54789], 99.90th=[56886], 99.95th=[57934], 00:33:51.005 | 99.99th=[57934] 00:33:51.005 bw ( KiB/s): min=24576, max=27520, per=26.59%, avg=26048.00, stdev=2081.72, samples=2 00:33:51.005 iops : min= 6144, max= 6880, avg=6512.00, stdev=520.43, samples=2 00:33:51.005 lat (msec) : 2=0.16%, 4=0.55%, 10=65.46%, 20=33.11%, 50=0.39% 00:33:51.005 lat (msec) : 100=0.33% 00:33:51.005 cpu : usr=3.89%, sys=5.29%, ctx=555, majf=0, minf=2 00:33:51.005 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:33:51.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.005 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:51.005 issued rwts: 
total=6144,6639,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.005 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:51.005 job3: (groupid=0, jobs=1): err= 0: pid=3000038: Thu Dec 5 14:23:57 2024 00:33:51.005 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:33:51.005 slat (nsec): min=956, max=5418.8k, avg=72023.87, stdev=421333.46 00:33:51.005 clat (usec): min=3553, max=28286, avg=9656.47, stdev=2195.07 00:33:51.005 lat (usec): min=3560, max=28291, avg=9728.49, stdev=2220.86 00:33:51.005 clat percentiles (usec): 00:33:51.005 | 1.00th=[ 5407], 5.00th=[ 6915], 10.00th=[ 7308], 20.00th=[ 8094], 00:33:51.005 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[10028], 00:33:51.005 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11994], 95.00th=[12780], 00:33:51.005 | 99.00th=[16712], 99.50th=[22414], 99.90th=[24511], 99.95th=[24511], 00:33:51.005 | 99.99th=[28181] 00:33:51.005 write: IOPS=6661, BW=26.0MiB/s (27.3MB/s)(26.1MiB/1003msec); 0 zone resets 00:33:51.005 slat (nsec): min=1604, max=13497k, avg=70437.44, stdev=416943.79 00:33:51.005 clat (usec): min=852, max=30015, avg=9387.78, stdev=3227.67 00:33:51.005 lat (usec): min=3392, max=30019, avg=9458.22, stdev=3252.29 00:33:51.005 clat percentiles (usec): 00:33:51.005 | 1.00th=[ 4424], 5.00th=[ 5473], 10.00th=[ 6783], 20.00th=[ 7570], 00:33:51.005 | 30.00th=[ 7963], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 9372], 00:33:51.005 | 70.00th=[ 9765], 80.00th=[10421], 90.00th=[12125], 95.00th=[14877], 00:33:51.005 | 99.00th=[26084], 99.50th=[26608], 99.90th=[28181], 99.95th=[28181], 00:33:51.005 | 99.99th=[30016] 00:33:51.005 bw ( KiB/s): min=24576, max=28672, per=27.18%, avg=26624.00, stdev=2896.31, samples=2 00:33:51.005 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:33:51.005 lat (usec) : 1000=0.01% 00:33:51.005 lat (msec) : 4=0.30%, 10=65.93%, 20=32.30%, 50=1.46% 00:33:51.005 cpu : usr=4.49%, sys=5.39%, ctx=737, majf=0, minf=1 00:33:51.005 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:33:51.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:51.005 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:51.005 issued rwts: total=6656,6681,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:51.005 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:51.005 00:33:51.005 Run status group 0 (all jobs): 00:33:51.005 READ: bw=91.8MiB/s (96.3MB/s), 19.2MiB/s-25.9MiB/s (20.2MB/s-27.2MB/s), io=92.2MiB (96.6MB), run=1003-1004msec 00:33:51.005 WRITE: bw=95.6MiB/s (100MB/s), 19.9MiB/s-26.0MiB/s (20.9MB/s-27.3MB/s), io=96.0MiB (101MB), run=1003-1004msec 00:33:51.005 00:33:51.005 Disk stats (read/write): 00:33:51.005 nvme0n1: ios=3452/3584, merge=0/0, ticks=21926/16018, in_queue=37944, util=87.68% 00:33:51.005 nvme0n2: ios=4658/4644, merge=0/0, ticks=15482/21907, in_queue=37389, util=86.36% 00:33:51.005 nvme0n3: ios=4713/5120, merge=0/0, ticks=33277/32809, in_queue=66086, util=96.26% 00:33:51.005 nvme0n4: ios=5177/5545, merge=0/0, ticks=23556/25268, in_queue=48824, util=96.06% 00:33:51.005 14:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:33:51.005 14:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3000099 00:33:51.005 14:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:33:51.005 14:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:33:51.005 [global] 00:33:51.005 thread=1 00:33:51.005 invalidate=1 00:33:51.005 rw=read 00:33:51.005 time_based=1 00:33:51.005 runtime=10 00:33:51.005 ioengine=libaio 00:33:51.005 direct=1 00:33:51.005 bs=4096 00:33:51.005 iodepth=1 00:33:51.005 norandommap=1 00:33:51.005 numjobs=1 00:33:51.005 00:33:51.005 [job0] 00:33:51.005 filename=/dev/nvme0n1 00:33:51.005 [job1] 00:33:51.005 filename=/dev/nvme0n2 00:33:51.005 [job2] 00:33:51.005 filename=/dev/nvme0n3 00:33:51.005 [job3] 00:33:51.005 filename=/dev/nvme0n4 00:33:51.005 Could not set queue depth (nvme0n1) 00:33:51.005 Could not set queue depth (nvme0n2) 00:33:51.005 Could not set queue depth (nvme0n3) 00:33:51.005 Could not set queue depth (nvme0n4) 00:33:51.575 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:51.575 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:51.575 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:51.575 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:51.575 fio-3.35 00:33:51.575 Starting 4 threads 00:33:54.113 14:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:33:54.113 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=266240, buflen=4096 00:33:54.113 fio: pid=3000492, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:54.113 14:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:33:54.374 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=274432, buflen=4096 00:33:54.374 fio: pid=3000486, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:54.374 14:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:54.374 14:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:33:54.634 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=294912, buflen=4096 00:33:54.634 fio: pid=3000458, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:54.634 14:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:54.634 14:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:33:54.635 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=311296, buflen=4096 00:33:54.635 fio: pid=3000469, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:54.635 14:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:54.635 14:24:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:33:54.894 00:33:54.894 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3000458: Thu Dec 5 14:24:00 2024 00:33:54.894 read: IOPS=24, BW=98.1KiB/s (100kB/s)(288KiB/2937msec) 00:33:54.894 slat (usec): min=24, max=11652, avg=331.88, stdev=1835.39 00:33:54.894 clat (usec): min=848, max=41899, avg=40433.98, stdev=4732.95 00:33:54.894 lat (usec): min=890, max=52975, avg=40770.06, stdev=5132.94 00:33:54.894 clat percentiles (usec): 00:33:54.894 | 1.00th=[ 848], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:33:54.894 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:54.894 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:54.894 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:33:54.894 | 99.99th=[41681] 00:33:54.894 bw ( KiB/s): min= 96, max= 104, per=27.78%, avg=99.20, stdev= 4.38, samples=5 00:33:54.894 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:33:54.894 lat (usec) : 1000=1.37% 00:33:54.894 lat (msec) : 50=97.26% 00:33:54.894 cpu : usr=0.10%, sys=0.00%, ctx=75, majf=0, minf=1 00:33:54.894 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.894 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.894 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.894 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:54.894 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3000469: Thu Dec 5 14:24:00 2024 00:33:54.894 read: IOPS=24, BW=96.7KiB/s (99.0kB/s)(304KiB/3143msec) 00:33:54.894 slat (usec): min=9, max=146, avg=29.90, stdev=22.66 00:33:54.894 clat (usec): min=1016, max=44903, avg=41252.55, stdev=4710.11 00:33:54.894 lat (usec): min=1054, max=44929, avg=41281.11, stdev=4708.88 00:33:54.894 clat percentiles (usec): 00:33:54.894 | 1.00th=[ 1020], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:54.894 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:33:54.894 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:54.894 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:33:54.894 | 99.99th=[44827] 00:33:54.894 bw ( KiB/s): min= 96, max= 96, per=26.94%, avg=96.00, stdev= 0.00, samples=6 00:33:54.894 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=6 00:33:54.894 lat (msec) : 2=1.30%, 50=97.40% 00:33:54.894 cpu : usr=0.10%, sys=0.00%, ctx=80, majf=0, minf=2 00:33:54.894 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.894 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.894 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.894 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:54.894 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3000486: Thu Dec 5 14:24:00 2024 00:33:54.894 read: IOPS=24, BW=96.8KiB/s (99.1kB/s)(268KiB/2770msec) 00:33:54.894 slat (usec): min=24, max=217, avg=28.16, stdev=23.31 00:33:54.894 clat (usec): min=1056, 
max=42057, avg=41291.89, stdev=4995.92 00:33:54.894 lat (usec): min=1091, max=42082, avg=41320.09, stdev=4994.99 00:33:54.894 clat percentiles (usec): 00:33:54.894 | 1.00th=[ 1057], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:33:54.894 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:33:54.894 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:54.894 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:54.894 | 99.99th=[42206] 00:33:54.894 bw ( KiB/s): min= 96, max= 96, per=26.94%, avg=96.00, stdev= 0.00, samples=5 00:33:54.894 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:33:54.894 lat (msec) : 2=1.47%, 50=97.06% 00:33:54.894 cpu : usr=0.11%, sys=0.00%, ctx=69, majf=0, minf=2 00:33:54.894 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.894 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.895 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.895 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:54.895 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3000492: Thu Dec 5 14:24:00 2024 00:33:54.895 read: IOPS=25, BW=101KiB/s (103kB/s)(260KiB/2585msec) 00:33:54.895 slat (nsec): min=8425, max=60283, avg=26938.14, stdev=4779.32 00:33:54.895 clat (usec): min=515, max=41064, avg=39719.27, stdev=7022.69 00:33:54.895 lat (usec): min=544, max=41091, avg=39746.20, stdev=7019.53 00:33:54.895 clat percentiles (usec): 00:33:54.895 | 1.00th=[ 515], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:54.895 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:54.895 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:54.895 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:54.895 | 99.99th=[41157] 00:33:54.895 bw ( KiB/s): min= 96, max= 104, per=28.06%, avg=100.80, stdev= 4.38, samples=5 00:33:54.895 iops : min= 24, max= 26, avg=25.20, stdev= 1.10, samples=5 00:33:54.895 lat (usec) : 750=3.03% 00:33:54.895 lat (msec) : 50=95.45% 00:33:54.895 cpu : usr=0.12%, sys=0.00%, ctx=67, majf=0, minf=2 00:33:54.895 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:54.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.895 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:54.895 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:54.895 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:54.895 00:33:54.895 Run status group 0 (all jobs): 00:33:54.895 READ: bw=356KiB/s (365kB/s), 96.7KiB/s-101KiB/s (99.0kB/s-103kB/s), io=1120KiB (1147kB), run=2585-3143msec 00:33:54.895 00:33:54.895 Disk stats (read/write): 00:33:54.895 nvme0n1: ios=69/0, merge=0/0, ticks=2790/0, in_queue=2790, util=94.02% 00:33:54.895 nvme0n2: ios=74/0, merge=0/0, ticks=3054/0, in_queue=3054, util=95.66% 00:33:54.895 nvme0n3: ios=62/0, merge=0/0, ticks=2559/0, in_queue=2559, util=95.99% 00:33:54.895 nvme0n4: ios=58/0, merge=0/0, ticks=2297/0, in_queue=2297, util=96.06% 00:33:54.895 14:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:54.895 14:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:33:55.154 14:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:55.154 14:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:33:55.411 14:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:55.411 14:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:33:55.411 14:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:55.411 14:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:33:55.669 14:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:33:55.669 14:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3000099 00:33:55.669 14:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:33:55.669 14:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:55.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:55.669 14:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:55.669 14:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:33:55.669 14:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:55.669 14:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:55.669 14:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:55.669 14:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:55.669 14:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:33:55.669 14:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:33:55.669 14:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:33:55.669 nvmf hotplug test: fio failed as expected 00:33:55.669 14:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:55.928 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:33:55.928 14:24:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:33:55.928 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:33:55.928 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:33:55.928 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:33:55.928 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:55.928 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:33:55.928 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:55.929 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:33:55.929 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:55.929 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:55.929 rmmod nvme_tcp 00:33:55.929 rmmod nvme_fabrics 00:33:55.929 rmmod nvme_keyring 00:33:55.929 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:55.929 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:33:55.929 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:33:55.929 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2996906 ']' 00:33:55.929 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2996906 00:33:55.929 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2996906 ']' 00:33:55.929 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2996906 00:33:55.929 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:33:55.929 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:55.929 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2996906 00:33:56.188 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:56.188 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:56.188 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2996906' 00:33:56.188 killing process with pid 2996906 00:33:56.188 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2996906 00:33:56.188 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2996906 00:33:56.188 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:56.188 14:24:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:56.188 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:56.188 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:33:56.188 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:33:56.188 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:56.188 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:56.188 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:56.188 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:56.188 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.188 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:56.188 14:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:58.728 00:33:58.728 real 0m28.291s 00:33:58.728 user 2m15.117s 00:33:58.728 sys 0m11.748s 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:58.728 ************************************ 00:33:58.728 END TEST nvmf_fio_target 00:33:58.728 ************************************ 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:58.728 ************************************ 00:33:58.728 START TEST nvmf_bdevio 00:33:58.728 ************************************ 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:58.728 * Looking for test storage... 
00:33:58.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:58.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.728 --rc genhtml_branch_coverage=1 00:33:58.728 --rc genhtml_function_coverage=1 00:33:58.728 --rc genhtml_legend=1 00:33:58.728 --rc geninfo_all_blocks=1 00:33:58.728 --rc geninfo_unexecuted_blocks=1 00:33:58.728 00:33:58.728 ' 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:58.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.728 --rc genhtml_branch_coverage=1 00:33:58.728 --rc genhtml_function_coverage=1 00:33:58.728 --rc genhtml_legend=1 00:33:58.728 --rc geninfo_all_blocks=1 00:33:58.728 --rc geninfo_unexecuted_blocks=1 00:33:58.728 00:33:58.728 ' 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:58.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.728 --rc genhtml_branch_coverage=1 00:33:58.728 --rc genhtml_function_coverage=1 00:33:58.728 --rc genhtml_legend=1 00:33:58.728 --rc geninfo_all_blocks=1 00:33:58.728 --rc geninfo_unexecuted_blocks=1 00:33:58.728 00:33:58.728 ' 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:58.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.728 --rc genhtml_branch_coverage=1 00:33:58.728 --rc genhtml_function_coverage=1 00:33:58.728 --rc genhtml_legend=1 00:33:58.728 --rc geninfo_all_blocks=1 00:33:58.728 --rc geninfo_unexecuted_blocks=1 00:33:58.728 00:33:58.728 ' 00:33:58.728 14:24:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:58.728 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:58.729 14:24:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:33:58.729 14:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:06.865 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:06.865 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:06.865 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:06.865 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:06.865 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:06.865 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:06.865 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:06.865 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:06.865 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:34:06.865 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:06.865 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:06.865 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:06.865 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:06.865 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:06.865 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:06.865 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:06.865 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:06.865 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:06.866 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:06.866 14:24:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:06.866 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:06.866 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:06.866 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:06.866 14:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:06.866 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:34:06.866 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:34:06.866 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:34:06.866 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:06.866 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:06.866 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:06.866 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:34:06.866 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:06.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:06.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms
00:34:06.866
00:34:06.866 --- 10.0.0.2 ping statistics ---
00:34:06.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:06.866 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms
00:34:06.866 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:06.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:06.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms
00:34:06.866
00:34:06.866 --- 10.0.0.1 ping statistics ---
00:34:06.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:06.866 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms
00:34:06.866 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:06.866 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0
00:34:06.866 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:06.866 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:06.866 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:06.866 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:06.866 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:06.866 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:06.866 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:06.866 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:34:06.866 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:06.866 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:06.867 14:24:12
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:06.867 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3006087 00:34:06.867 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3006087 00:34:06.867 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:06.867 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3006087 ']' 00:34:06.867 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:06.867 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:06.867 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:06.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:06.867 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:06.867 14:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:06.867 [2024-12-05 14:24:12.356922] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:06.867 [2024-12-05 14:24:12.358035] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:34:06.867 [2024-12-05 14:24:12.358080] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:06.867 [2024-12-05 14:24:12.457851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:06.867 [2024-12-05 14:24:12.510404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:06.867 [2024-12-05 14:24:12.510470] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:06.867 [2024-12-05 14:24:12.510479] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:06.867 [2024-12-05 14:24:12.510487] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:06.867 [2024-12-05 14:24:12.510493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:06.867 [2024-12-05 14:24:12.512542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:06.867 [2024-12-05 14:24:12.512874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:06.867 [2024-12-05 14:24:12.513011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:06.867 [2024-12-05 14:24:12.513014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:06.867 [2024-12-05 14:24:12.593788] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
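The launch traced just above starts one nvmf_tgt inside the test namespace with --interrupt-mode and a four-core mask (0x78 = cores 3-6), then waitforlisten blocks until the app's RPC socket answers. A minimal standalone sketch of the same launch pattern; the polling loop below is a rough stand-in for the autotest waitforlisten helper, not its actual implementation:

    # Start the SPDK target in the namespace: shm id 0, all tracepoint groups
    # (0xFFFF), interrupt mode, reactors on cores 3-6 (mask 0x78)
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!
    # Block until the RPC socket accepts requests before issuing any RPCs
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
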
00:34:06.867 [2024-12-05 14:24:12.594733] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:06.867 [2024-12-05 14:24:12.594999] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:06.867 [2024-12-05 14:24:12.595487] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:06.867 [2024-12-05 14:24:12.595538] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:07.127 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:07.127 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:34:07.127 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:07.127 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:07.127 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:07.127 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:07.127 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:07.127 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.127 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:07.127 [2024-12-05 14:24:13.218034] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:07.127 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.127 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:07.127 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.127 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:07.127 Malloc0 00:34:07.127 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.127 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:07.127 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.127 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:07.127 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.127 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:07.127 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.127 14:24:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:34:07.128 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:07.128 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:07.128 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:07.128 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:34:07.128 [2024-12-05 14:24:13.310351] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:07.128 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:07.128 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:34:07.128 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:34:07.128 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=()
00:34:07.128 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config
00:34:07.128 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:34:07.128 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:34:07.128 {
00:34:07.128 "params": {
00:34:07.128 "name": "Nvme$subsystem",
00:34:07.128 "trtype": "$TEST_TRANSPORT",
00:34:07.128 "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:07.128 "adrfam": "ipv4",
00:34:07.128 "trsvcid": "$NVMF_PORT",
00:34:07.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:07.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:07.128 "hdgst": ${hdgst:-false},
00:34:07.128 "ddgst": ${ddgst:-false}
00:34:07.128 },
00:34:07.128 "method": "bdev_nvme_attach_controller"
00:34:07.128 }
00:34:07.128 EOF
00:34:07.128 )")
00:34:07.128 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat
00:34:07.128 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq .
00:34:07.128 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=,
00:34:07.128 14:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:34:07.128 "params": {
00:34:07.128 "name": "Nvme1",
00:34:07.128 "trtype": "tcp",
00:34:07.128 "traddr": "10.0.0.2",
00:34:07.128 "adrfam": "ipv4",
00:34:07.128 "trsvcid": "4420",
00:34:07.128 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:34:07.128 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:34:07.128 "hdgst": false,
00:34:07.128 "ddgst": false
00:34:07.128 },
00:34:07.128 "method": "bdev_nvme_attach_controller"
00:34:07.128 }'
00:34:07.128 [2024-12-05 14:24:13.368892] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization...
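The config printed above is what gen_nvmf_target_json emits for subsystem 1; bdevio never sees a file on disk because the script hands it the JSON on file descriptor 62. A condensed sketch of equivalent plumbing, with the fragment shown in the trace hard-coded (the real helper builds it from $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and friends, so take this only as an illustration):

    gen_json() {
        printf '%s\n' '{
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }'
    }
    # /dev/fd/62 in the trace is this kind of fd-based hand-off: the JSON
    # arrives on a file descriptor, not as a file on disk
    ./test/bdev/bdevio/bdevio --json <(gen_json)
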
00:34:07.128 [2024-12-05 14:24:13.368969] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3006202 ]
00:34:07.388 [2024-12-05 14:24:13.462819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:34:07.388 [2024-12-05 14:24:13.520580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:34:07.388 [2024-12-05 14:24:13.520772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:07.388 [2024-12-05 14:24:13.520772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:34:07.388 I/O targets:
00:34:07.388 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:34:07.388
00:34:07.388
00:34:07.388 CUnit - A unit testing framework for C - Version 2.1-3
00:34:07.388 http://cunit.sourceforge.net/
00:34:07.388
00:34:07.388
00:34:07.388 Suite: bdevio tests on: Nvme1n1
00:34:07.649 Test: blockdev write read block ...passed
00:34:07.649 Test: blockdev write zeroes read block ...passed
00:34:07.649 Test: blockdev write zeroes read no split ...passed
00:34:07.649 Test: blockdev write zeroes read split ...passed
00:34:07.649 Test: blockdev write zeroes read split partial ...passed
00:34:07.649 Test: blockdev reset ...[2024-12-05 14:24:13.895449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:34:07.649 [2024-12-05 14:24:13.895568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c3970 (9): Bad file descriptor
00:34:07.649 [2024-12-05 14:24:13.902728] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:34:07.649 passed 00:34:07.649 Test: blockdev write read 8 blocks ...passed 00:34:07.910 Test: blockdev write read size > 128k ...passed 00:34:07.910 Test: blockdev write read invalid size ...passed 00:34:07.910 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:07.910 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:07.910 Test: blockdev write read max offset ...passed 00:34:07.910 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:07.910 Test: blockdev writev readv 8 blocks ...passed 00:34:07.910 Test: blockdev writev readv 30 x 1block ...passed 00:34:07.910 Test: blockdev writev readv block ...passed 00:34:07.910 Test: blockdev writev readv size > 128k ...passed 00:34:07.910 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:07.910 Test: blockdev comparev and writev ...[2024-12-05 14:24:14.168590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:07.910 [2024-12-05 14:24:14.168639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:07.910 [2024-12-05 14:24:14.168656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:07.910 [2024-12-05 14:24:14.168665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:07.910 [2024-12-05 14:24:14.169250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:07.910 [2024-12-05 14:24:14.169262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:07.910 [2024-12-05 14:24:14.169277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:07.910 [2024-12-05 14:24:14.169285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:07.910 [2024-12-05 14:24:14.169869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:07.911 [2024-12-05 14:24:14.169882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:07.911 [2024-12-05 14:24:14.169896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:07.911 [2024-12-05 14:24:14.169905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:07.911 [2024-12-05 14:24:14.170497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:07.911 [2024-12-05 14:24:14.170508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:07.911 [2024-12-05 14:24:14.170522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:07.911 [2024-12-05 14:24:14.170531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:34:08.172 passed
00:34:08.172 Test: blockdev nvme passthru rw ...passed
00:34:08.172 Test: blockdev nvme passthru vendor specific ...[2024-12-05 14:24:14.254382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:34:08.172 [2024-12-05 14:24:14.254398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:34:08.172 [2024-12-05 14:24:14.254789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:34:08.173 [2024-12-05 14:24:14.254810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:34:08.173 [2024-12-05 14:24:14.255182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:34:08.173 [2024-12-05 14:24:14.255192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:34:08.173 [2024-12-05 14:24:14.255570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:34:08.173 [2024-12-05 14:24:14.255584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:34:08.173 passed
00:34:08.173 Test: blockdev nvme admin passthru ...passed
00:34:08.173 Test: blockdev copy ...passed
00:34:08.173
00:34:08.173 Run Summary: Type Total Ran Passed Failed Inactive
00:34:08.173 suites 1 1 n/a 0 0
00:34:08.173 tests 23 23 23 0 0
00:34:08.173 asserts 152 152 152 0 n/a
00:34:08.173
00:34:08.173 Elapsed time = 1.257 seconds
00:34:08.173 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:08.173 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:08.173 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:34:08.173 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:08.173 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:34:08.173 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:34:08.173 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:08.173 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:34:08.173 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:08.173 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:34:08.173 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:08.173 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:08.432 rmmod nvme_tcp
00:34:08.432 rmmod nvme_fabrics
00:34:08.432 rmmod nvme_keyring
00:34:08.432 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
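Teardown as traced: sync, retry-unload the kernel NVMe modules (the rmmod lines above are modprobe -v output), kill the target, then strip only the firewall rules this run tagged. A condensed, hypothetical standalone form of the same sequence; ip netns delete stands in for the remove_spdk_ns helper:

    sync
    set +e
    for i in {1..20}; do   # module unload can race with the dying target; retry
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
    kill "$nvmfpid"; wait "$nvmfpid"
    # iptr: rewrite the ruleset minus every rule carrying the SPDK_NVMF comment tag
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1
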
00:34:08.432 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e
00:34:08.432 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0
00:34:08.432 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3006087 ']'
00:34:08.432 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3006087
00:34:08.432 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3006087 ']'
00:34:08.432 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3006087
00:34:08.432 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname
00:34:08.432 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:08.432 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3006087
00:34:08.432 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3
00:34:08.432 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']'
00:34:08.432 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3006087'
00:34:08.432 killing process with pid 3006087
00:34:08.432 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3006087
00:34:08.432 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3006087
00:34:08.691 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:08.691 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:34:08.691 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:34:08.691 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr
00:34:08.691 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save
00:34:08.691 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:34:08.691 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore
00:34:08.691 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:08.691 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:08.691 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:08.691 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:08.691 14:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:10.599 14:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:10.599
00:34:10.599 real 0m12.315s
00:34:10.599 user 0m9.794s
00:34:10.599 sys 0m6.424s
00:34:10.599 14:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:10.599 14:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:34:10.599 ************************************
00:34:10.599 END TEST nvmf_bdevio
00:34:10.599 ************************************
00:34:10.859 14:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:34:10.859
00:34:10.859 real 5m0.430s
00:34:10.859 user 10m16.027s
00:34:10.859 sys 2m6.515s
00:34:10.859 14:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:10.859 14:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:10.859 ************************************
00:34:10.859 END TEST nvmf_target_core_interrupt_mode
00:34:10.859 ************************************
00:34:10.859 14:24:16 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
00:34:10.859 14:24:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:34:10.859 14:24:16 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:10.859 14:24:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:10.859 ************************************
00:34:10.859 START TEST nvmf_interrupt
00:34:10.859 ************************************
00:34:10.859 14:24:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
00:34:10.859 * Looking for test storage...
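The starred banners above come from the run_test wrapper, which times each test script and frames its output. A condensed sketch of the behavior visible in this log (the real helper in autotest_common.sh also manages xtrace state and timing bookkeeping, so this is only an approximation):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"          # real/user/sys lines above are this time(1) output
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }
    run_test nvmf_interrupt ./test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
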
00:34:10.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:10.859 14:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:10.859 14:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:34:10.859 14:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:11.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:11.119 --rc genhtml_branch_coverage=1 00:34:11.119 --rc genhtml_function_coverage=1 00:34:11.119 --rc genhtml_legend=1 00:34:11.119 --rc geninfo_all_blocks=1 00:34:11.119 --rc geninfo_unexecuted_blocks=1 00:34:11.119 00:34:11.119 ' 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:11.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:11.119 --rc genhtml_branch_coverage=1 00:34:11.119 --rc genhtml_function_coverage=1 00:34:11.119 --rc genhtml_legend=1 00:34:11.119 --rc geninfo_all_blocks=1 00:34:11.119 --rc geninfo_unexecuted_blocks=1 00:34:11.119 00:34:11.119 ' 00:34:11.119 14:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:11.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:11.119 --rc genhtml_branch_coverage=1 00:34:11.119 --rc genhtml_function_coverage=1 00:34:11.119 --rc genhtml_legend=1 00:34:11.119 --rc geninfo_all_blocks=1 00:34:11.119 --rc geninfo_unexecuted_blocks=1 00:34:11.119 00:34:11.119 ' 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:11.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:11.120 --rc genhtml_branch_coverage=1 00:34:11.120 --rc genhtml_function_coverage=1 00:34:11.120 --rc genhtml_legend=1 00:34:11.120 --rc geninfo_all_blocks=1 00:34:11.120 --rc geninfo_unexecuted_blocks=1 00:34:11.120 00:34:11.120 ' 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:34:11.120 14:24:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:19.254 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:19.254 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:34:19.254 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:19.254 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:19.254 14:24:24 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:19.254 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:19.254 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:19.254 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:34:19.254 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:19.254 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:34:19.254 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:34:19.254 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:34:19.254 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:34:19.254 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:34:19.254 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:19.255 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:19.255 14:24:24 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:19.255 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:19.255 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:19.255 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:19.255 14:24:24 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:19.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:19.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.512 ms
00:34:19.255
00:34:19.255 --- 10.0.0.2 ping statistics ---
00:34:19.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:19.255 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:19.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:19.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms
00:34:19.255
00:34:19.255 --- 10.0.0.1 ping statistics ---
00:34:19.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:19.255 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3010560
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3010560
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3010560 ']'
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:19.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:19.255 14:24:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:34:19.256 [2024-12-05 14:24:24.639334] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:34:19.256 [2024-12-05 14:24:24.640464] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization...
00:34:19.256 [2024-12-05 14:24:24.640513] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:19.256 [2024-12-05 14:24:24.741386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
[2024-12-05 14:24:24.793226] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
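Both test stages build the identical loopback topology seen in this trace: one port of the two-port NIC moves into a private namespace and becomes the target side, its peer stays in the root namespace as the initiator, and a tagged iptables rule opens the NVMe/TCP port. Collected into one sketch, with interface names and addresses copied from the trace:

    NS=cvl_0_0_ns_spdk TGT_IF=cvl_0_0 INI_IF=cvl_0_1
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    # Tag the rule so cleanup can find and drop it later
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                          # root ns reaches the target side
    ip netns exec "$NS" ping -c 1 10.0.0.1      # and the target ns reaches back
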
00:34:19.256 [2024-12-05 14:24:24.793277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:19.256 [2024-12-05 14:24:24.793286] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:19.256 [2024-12-05 14:24:24.793294] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:19.256 [2024-12-05 14:24:24.793300] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:19.256 [2024-12-05 14:24:24.795113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:19.256 [2024-12-05 14:24:24.795118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:19.256 [2024-12-05 14:24:24.873202] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:19.256 [2024-12-05 14:24:24.873989] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:19.256 [2024-12-05 14:24:24.874214] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:19.256 14:24:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:19.256 14:24:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:34:19.256 14:24:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:19.256 14:24:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:19.256 14:24:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:19.256 14:24:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:19.256 14:24:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:34:19.256 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:34:19.256 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:19.256 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:34:19.256 5000+0 records in 00:34:19.256 5000+0 records out 00:34:19.256 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0186049 s, 550 MB/s 00:34:19.256 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:34:19.256 14:24:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.256 14:24:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:19.256 AIO0 00:34:19.256 14:24:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.256 14:24:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:34:19.256 14:24:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.256 14:24:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:19.514 [2024-12-05 14:24:25.552127] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.514 14:24:25 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:19.514 [2024-12-05 14:24:25.596450] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3010560 0 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3010560 0 idle 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3010560 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3010560 -w 256 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3010560 root 20 0 128.2g 43776 32256 S 6.7 0.0 0:00.31 reactor_0' 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3010560 root 20 0 128.2g 43776 32256 S 6.7 0.0 0:00.31 reactor_0 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=6 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3010560 1 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3010560 1 idle 00:34:19.514 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3010560 00:34:19.515 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:19.515 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:19.515 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:19.515 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:19.515 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:19.515 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:19.515 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:19.515 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:19.515 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:19.515 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3010560 -w 256 00:34:19.515 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:19.773 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3010579 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:34:19.773 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3010579 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:34:19.773 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:19.773 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:19.773 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:19.773 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:19.773 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:19.773 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:19.773 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:19.773 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:19.773 14:24:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:34:19.773 14:24:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3010917 00:34:19.773 14:24:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:19.773 14:24:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:19.773 14:24:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:19.773 14:24:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3010560 0 00:34:19.773 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3010560 0 busy 00:34:19.773 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3010560 00:34:19.773 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:19.773 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:19.773 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:19.773 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:19.773 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:19.773 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:19.774 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:19.774 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:19.774 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3010560 -w 256 00:34:19.774 14:24:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:20.033 14:24:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3010560 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.32 reactor_0' 00:34:20.033 14:24:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3010560 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.32 reactor_0 00:34:20.033 14:24:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:20.033 14:24:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:20.033 14:24:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:20.033 14:24:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:20.033 14:24:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:20.033 14:24:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:20.033 14:24:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:34:20.975 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:34:20.975 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:20.975 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3010560 -w 256 00:34:20.975 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3010560 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.53 reactor_0' 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3010560 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.53 reactor_0 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3010560 1 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3010560 1 busy 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3010560 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3010560 -w 256 00:34:21.234 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:21.552 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3010579 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.30 reactor_1' 00:34:21.552 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3010579 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.30 reactor_1 00:34:21.552 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:21.552 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:21.552 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:21.552 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:21.552 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:21.552 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:21.552 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:21.552 14:24:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:21.552 14:24:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3010917 00:34:31.696 Initializing NVMe Controllers 00:34:31.696 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:31.696 Controller IO queue size 256, less than required. 00:34:31.696 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:31.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:31.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:31.696 Initialization complete. Launching workers. 
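The latency summary just below is the output of the perf client launched above; the trace wraps its command line, which reassembled reads as follows (paths, flags, and addresses exactly as this job used them):

# 4 KiB random I/O with a 30% read mix (-M 30), queue depth 256, 10 seconds,
# on cores 2-3 (mask 0xC), against the TCP subsystem created earlier.
./build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'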
00:34:31.696 ========================================================
00:34:31.696 Latency(us)
00:34:31.696 Device Information : IOPS MiB/s Average min max
00:34:31.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19390.00 75.74 13207.54 4434.46 31474.58
00:34:31.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 20125.70 78.62 12721.86 8026.90 27873.43
00:34:31.696 ========================================================
00:34:31.696 Total : 39515.69 154.36 12960.18 4434.46 31474.58
00:34:31.696
00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3010560 0 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3010560 0 idle 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3010560 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3010560 -w 256 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3010560 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.31 reactor_0' 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3010560 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.31 reactor_0 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3010560 1 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3010560 1 idle 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3010560 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3010560 -w 256 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3010579 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3010579 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:31.696 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:31.697 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:31.697 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:31.697 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:31.697 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:31.697 14:24:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:31.697 14:24:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:31.697 14:24:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:34:31.697 14:24:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:34:31.697 14:24:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:31.697 14:24:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:31.697 14:24:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:34:33.080 14:24:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:33.080 14:24:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:33.080 14:24:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:33.080 14:24:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:33.080 14:24:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:33.080 14:24:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:34:33.080 14:24:39 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:34:33.080 14:24:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3010560 0 00:34:33.080 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3010560 0 idle 00:34:33.080 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3010560 00:34:33.080 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:33.080 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:33.080 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:33.080 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:33.080 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:33.080 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:33.080 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:33.080 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:33.080 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:33.080 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3010560 -w 256 00:34:33.080 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3010560 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.66 reactor_0' 00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3010560 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.66 reactor_0 00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3010560 1 00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3010560 1 idle 00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3010560 00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
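The idle check being traced at this point repeats the same probe each time. Condensed into a standalone sketch of the interrupt/common.sh logic as the trace shows it:

# Take one batch top sample for the target pid, pick the reactor_<idx> thread
# row, read %CPU (field 9), drop the decimals, and compare to a threshold.
reactor_cpu_rate() {
    local pid=$1 idx=$2 row rate
    row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | sed -e 's/^\s*//g')
    rate=$(echo "$row" | awk '{print $9}')
    echo "${rate%.*}"    # 6.7 -> 6, 99.9 -> 99, matching the cpu_rate lines above
}
idle_threshold=30    # value the harness uses for the idle state
rate=$(reactor_cpu_rate 3010560 0)
(( rate > idle_threshold )) && echo "reactor_0 busy (${rate}%)" || echo "reactor_0 idle"

When the expected busy state has not ramped up yet, the harness retries this up to ten times with a one-second sleep between samples, which is the (( j-- )) / sleep 1 loop visible earlier in the trace.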
00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3010560 -w 256 00:34:33.341 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3010579 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1' 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3010579 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:33.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:33.602 rmmod nvme_tcp 00:34:33.602 rmmod nvme_fabrics 00:34:33.602 rmmod nvme_keyring 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3010560 ']' 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3010560 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3010560 ']' 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3010560 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:33.602 14:24:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3010560 00:34:33.864 14:24:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:33.864 14:24:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:33.864 14:24:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3010560' 00:34:33.864 killing process with pid 3010560 00:34:33.864 14:24:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3010560 00:34:33.864 14:24:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3010560 00:34:33.864 14:24:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:33.864 14:24:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:33.864 14:24:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:33.864 14:24:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:34:33.864 14:24:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:34:33.864 14:24:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:33.864 14:24:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:34:33.864 14:24:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:33.864 14:24:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:33.864 14:24:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:33.864 14:24:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:33.864 14:24:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:36.411 14:24:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:36.411 00:34:36.411 real 0m25.158s 00:34:36.411 user 0m40.364s 00:34:36.411 sys 0m9.625s 00:34:36.411 14:24:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:36.411 14:24:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:36.411 ************************************ 00:34:36.411 END TEST nvmf_interrupt 00:34:36.411 ************************************ 00:34:36.411 00:34:36.411 real 29m58.289s 00:34:36.411 user 61m24.331s 00:34:36.411 sys 10m13.187s 00:34:36.411 14:24:42 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:36.411 14:24:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:36.411 ************************************ 00:34:36.411 END TEST nvmf_tcp 00:34:36.411 ************************************ 00:34:36.411 14:24:42 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:34:36.411 14:24:42 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:36.411 14:24:42 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:36.411 14:24:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:36.411 14:24:42 -- common/autotest_common.sh@10 -- # set +x 00:34:36.411 ************************************ 00:34:36.411 START TEST spdkcli_nvmf_tcp 00:34:36.411 ************************************ 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:36.411 * Looking for test storage... 00:34:36.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:36.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:36.411 --rc genhtml_branch_coverage=1 00:34:36.411 --rc genhtml_function_coverage=1 00:34:36.411 --rc genhtml_legend=1 00:34:36.411 --rc geninfo_all_blocks=1 00:34:36.411 --rc geninfo_unexecuted_blocks=1 00:34:36.411 00:34:36.411 ' 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:36.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:36.411 --rc genhtml_branch_coverage=1 00:34:36.411 --rc genhtml_function_coverage=1 00:34:36.411 --rc genhtml_legend=1 00:34:36.411 --rc geninfo_all_blocks=1 00:34:36.411 --rc geninfo_unexecuted_blocks=1 00:34:36.411 00:34:36.411 ' 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:36.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:36.411 --rc genhtml_branch_coverage=1 00:34:36.411 --rc genhtml_function_coverage=1 00:34:36.411 --rc genhtml_legend=1 00:34:36.411 --rc geninfo_all_blocks=1 00:34:36.411 --rc geninfo_unexecuted_blocks=1 00:34:36.411 00:34:36.411 ' 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:36.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:36.411 --rc genhtml_branch_coverage=1 00:34:36.411 --rc genhtml_function_coverage=1 00:34:36.411 --rc genhtml_legend=1 00:34:36.411 --rc geninfo_all_blocks=1 00:34:36.411 --rc geninfo_unexecuted_blocks=1 00:34:36.411 00:34:36.411 ' 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:36.411 14:24:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:36.412 
14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:36.412 14:24:42 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:36.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3014122 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3014122 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3014122 ']' 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:36.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:36.412 14:24:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:36.412 [2024-12-05 14:24:42.542736] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
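The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']', i.e. testing a variable that is empty in this run. A defensive pattern for that class of check is sketched below; FEATURE_FLAG is a stand-in name, since the log does not show which variable line 33 actually reads:

# Expanding with a :-0 default keeps the -eq test numeric even when unset.
if [ "${FEATURE_FLAG:-0}" -eq 1 ]; then
    echo "feature enabled"
fi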
00:34:36.412 [2024-12-05 14:24:42.542790] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3014122 ] 00:34:36.412 [2024-12-05 14:24:42.631114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:36.412 [2024-12-05 14:24:42.668737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:36.412 [2024-12-05 14:24:42.668749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:37.353 14:24:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:37.353 14:24:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:34:37.353 14:24:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:37.353 14:24:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:37.353 14:24:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:37.353 14:24:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:37.353 14:24:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:37.354 14:24:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:37.354 14:24:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:37.354 14:24:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:37.354 14:24:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:37.354 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:37.354 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:37.354 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:37.354 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:37.354 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:37.354 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:37.354 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:37.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:37.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:37.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:37.354 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:37.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:37.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:37.354 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:37.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:37.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:34:37.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:37.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:37.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:37.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:37.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:37.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:37.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:37.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:37.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:37.354 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:37.354 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:37.354 ' 00:34:39.888 [2024-12-05 14:24:46.097563] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:41.274 [2024-12-05 14:24:47.461777] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:43.816 [2024-12-05 14:24:49.980739] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:46.359 [2024-12-05 14:24:52.207233] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:47.746 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:47.746 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:47.746 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:47.746 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:47.746 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:47.746 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:47.746 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:47.746 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:47.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:47.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:47.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:47.746 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:47.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:47.746 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:47.746 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:47.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:47.746 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:47.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:47.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:47.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:47.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:47.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:47.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:47.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:47.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:47.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:47.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:47.747 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:47.747 14:24:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:47.747 14:24:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:47.747 14:24:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:47.747 14:24:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:47.747 14:24:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:47.747 14:24:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:47.747 14:24:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:47.747 14:24:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:48.318 14:24:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:48.318 14:24:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:48.318 14:24:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:48.318 14:24:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:48.318 14:24:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:48.318 
14:24:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:48.318 14:24:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:48.318 14:24:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:48.318 14:24:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:48.318 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:48.318 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:48.318 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:48.318 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:48.318 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:48.318 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:48.318 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:48.318 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:48.318 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:48.318 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:48.318 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:48.318 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:48.318 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:48.318 ' 00:34:54.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:54.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:54.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:54.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:54.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:54.896 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:54.896 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:54.896 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:54.896 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:54.896 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:54.896 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:54.896 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:54.896 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:54.896 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:54.896 
14:25:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3014122 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3014122 ']' 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3014122 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3014122 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3014122' 00:34:54.896 killing process with pid 3014122 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3014122 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3014122 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3014122 ']' 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3014122 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3014122 ']' 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3014122 00:34:54.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3014122) - No such process 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3014122 is not found' 00:34:54.896 Process with pid 3014122 is not found 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:54.896 00:34:54.896 real 0m18.129s 00:34:54.896 user 0m40.320s 00:34:54.896 sys 0m0.833s 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:54.896 14:25:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:54.896 ************************************ 00:34:54.896 END TEST spdkcli_nvmf_tcp 00:34:54.896 ************************************ 00:34:54.896 14:25:00 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:54.896 14:25:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:54.896 14:25:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:54.896 14:25:00 -- common/autotest_common.sh@10 -- # set +x 00:34:54.896 ************************************ 00:34:54.896 START TEST nvmf_identify_passthru 00:34:54.896 ************************************ 00:34:54.896 14:25:00 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:54.896 * Looking for test 
storage... 00:34:54.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:54.896 14:25:00 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:54.896 14:25:00 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:34:54.896 14:25:00 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:54.896 14:25:00 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:54.896 14:25:00 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:34:54.896 14:25:00 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:54.896 14:25:00 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:54.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.896 --rc genhtml_branch_coverage=1 00:34:54.896 --rc genhtml_function_coverage=1 00:34:54.896 --rc genhtml_legend=1 00:34:54.896 --rc geninfo_all_blocks=1 00:34:54.896 --rc geninfo_unexecuted_blocks=1 00:34:54.896 00:34:54.896 ' 00:34:54.896 14:25:00 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:54.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.896 --rc genhtml_branch_coverage=1 00:34:54.896 --rc genhtml_function_coverage=1 00:34:54.896 --rc genhtml_legend=1 00:34:54.897 --rc geninfo_all_blocks=1 00:34:54.897 --rc geninfo_unexecuted_blocks=1 00:34:54.897 00:34:54.897 ' 00:34:54.897 14:25:00 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:54.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.897 --rc genhtml_branch_coverage=1 00:34:54.897 --rc genhtml_function_coverage=1 00:34:54.897 --rc genhtml_legend=1 00:34:54.897 --rc geninfo_all_blocks=1 00:34:54.897 --rc geninfo_unexecuted_blocks=1 00:34:54.897 00:34:54.897 ' 00:34:54.897 14:25:00 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:54.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.897 --rc genhtml_branch_coverage=1 00:34:54.897 --rc genhtml_function_coverage=1 00:34:54.897 --rc genhtml_legend=1 00:34:54.897 --rc geninfo_all_blocks=1 00:34:54.897 --rc geninfo_unexecuted_blocks=1 00:34:54.897 00:34:54.897 ' 00:34:54.897 14:25:00 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:54.897 14:25:00 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:54.897 14:25:00 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:54.897 14:25:00 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:54.897 14:25:00 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:54.897 14:25:00 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.897 14:25:00 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.897 14:25:00 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.897 14:25:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:54.897 14:25:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:54.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:54.897 14:25:00 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:54.897 14:25:00 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:54.897 14:25:00 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:54.897 14:25:00 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:54.897 14:25:00 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:54.897 14:25:00 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.897 14:25:00 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.897 14:25:00 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.897 14:25:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:54.897 14:25:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.897 14:25:00 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:54.897 14:25:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:54.897 14:25:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:54.897 14:25:00 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:34:54.897 14:25:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:03.036 14:25:07 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:03.036 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:03.036 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:03.036 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:03.037 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:03.037 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:03.037 14:25:07 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:03.037 14:25:07 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:03.037 14:25:08 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:03.037 14:25:08 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:03.037 14:25:08 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:03.037 14:25:08 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:03.037 14:25:08 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:03.037 14:25:08 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:03.037 14:25:08 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:03.037 14:25:08 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:03.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:03.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:35:03.037 00:35:03.037 --- 10.0.0.2 ping statistics --- 00:35:03.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:03.037 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:35:03.037 14:25:08 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:03.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
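The nvmf_tcp_init trace above splits the two e810 ports into a point-to-point test fabric: the target port cvl_0_0 is moved into a fresh network namespace and addressed as 10.0.0.2, while the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, and an iptables rule opens TCP/4420; the reverse ping that starts above completes just below. Condensed from the trace (interface names and addresses as logged):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP default port
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns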
00:35:03.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:35:03.037 00:35:03.037 --- 10.0.0.1 ping statistics --- 00:35:03.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:03.037 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:35:03.037 14:25:08 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:03.037 14:25:08 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:35:03.037 14:25:08 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:03.037 14:25:08 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:03.037 14:25:08 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:03.037 14:25:08 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:03.037 14:25:08 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:03.037 14:25:08 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:03.037 14:25:08 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:03.037 14:25:08 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:03.037 14:25:08 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:03.037 14:25:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:03.037 14:25:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:03.037 14:25:08 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:35:03.037 14:25:08 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:35:03.037 14:25:08 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:35:03.037 14:25:08 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:35:03.037 14:25:08 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:35:03.037 14:25:08 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:35:03.037 14:25:08 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:03.037 14:25:08 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:03.037 14:25:08 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:35:03.037 14:25:08 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:35:03.037 14:25:08 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:35:03.037 14:25:08 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:35:03.037 14:25:08 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:35:03.037 14:25:08 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:35:03.037 14:25:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:03.037 14:25:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:03.037 14:25:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:03.038 14:25:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:35:03.038 14:25:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:03.038 14:25:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:03.038 14:25:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:03.298 14:25:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:35:03.298 14:25:09 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:03.298 14:25:09 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:03.298 14:25:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:03.298 14:25:09 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:03.298 14:25:09 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:03.298 14:25:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:03.298 14:25:09 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3021532 00:35:03.298 14:25:09 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:03.298 14:25:09 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:03.298 14:25:09 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3021532 00:35:03.298 14:25:09 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3021532 ']' 00:35:03.298 14:25:09 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:03.298 14:25:09 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:03.299 14:25:09 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:03.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:03.299 14:25:09 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:03.299 14:25:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:03.299 [2024-12-05 14:25:09.448038] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:35:03.299 [2024-12-05 14:25:09.448093] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:03.299 [2024-12-05 14:25:09.541261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:03.299 [2024-12-05 14:25:09.582976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:03.299 [2024-12-05 14:25:09.583015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:03.299 [2024-12-05 14:25:09.583023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:03.299 [2024-12-05 14:25:09.583029] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:03.299 [2024-12-05 14:25:09.583035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:03.299 [2024-12-05 14:25:09.585010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:03.299 [2024-12-05 14:25:09.585163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:03.299 [2024-12-05 14:25:09.585320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:03.299 [2024-12-05 14:25:09.585322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:04.241 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:04.241 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:35:04.241 14:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:04.241 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.241 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:04.241 INFO: Log level set to 20 00:35:04.241 INFO: Requests: 00:35:04.241 { 00:35:04.241 "jsonrpc": "2.0", 00:35:04.241 "method": "nvmf_set_config", 00:35:04.241 "id": 1, 00:35:04.241 "params": { 00:35:04.241 "admin_cmd_passthru": { 00:35:04.241 "identify_ctrlr": true 00:35:04.241 } 00:35:04.241 } 00:35:04.241 } 00:35:04.241 00:35:04.241 INFO: response: 00:35:04.241 { 00:35:04.241 "jsonrpc": "2.0", 00:35:04.241 "id": 1, 00:35:04.241 "result": true 00:35:04.241 } 00:35:04.241 00:35:04.241 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.241 14:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:04.241 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.241 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:04.241 INFO: Setting log level to 20 00:35:04.241 INFO: Setting log level to 20 00:35:04.241 INFO: Log level set to 20 00:35:04.241 INFO: Log level set to 20 00:35:04.241 INFO: Requests: 00:35:04.241 { 00:35:04.241 "jsonrpc": "2.0", 00:35:04.241 "method": "framework_start_init", 00:35:04.241 "id": 1 00:35:04.241 } 00:35:04.241 00:35:04.241 INFO: Requests: 00:35:04.241 { 00:35:04.241 "jsonrpc": "2.0", 00:35:04.241 "method": "framework_start_init", 00:35:04.241 "id": 1 00:35:04.241 } 00:35:04.241 00:35:04.241 [2024-12-05 14:25:10.349609] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:04.241 INFO: response: 00:35:04.241 { 00:35:04.241 "jsonrpc": "2.0", 00:35:04.241 "id": 1, 00:35:04.241 "result": true 00:35:04.241 } 00:35:04.241 00:35:04.241 INFO: response: 00:35:04.241 { 00:35:04.241 "jsonrpc": "2.0", 00:35:04.241 "id": 1, 00:35:04.241 "result": true 00:35:04.241 } 00:35:04.241 00:35:04.241 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.241 14:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:04.241 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.241 14:25:10 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:04.241 INFO: Setting log level to 40 00:35:04.241 INFO: Setting log level to 40 00:35:04.241 INFO: Setting log level to 40 00:35:04.241 [2024-12-05 14:25:10.363165] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:04.241 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.241 14:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:04.241 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:04.241 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:04.241 14:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:35:04.241 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.241 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:04.501 Nvme0n1 00:35:04.502 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.502 14:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:04.502 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.502 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:04.502 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.502 14:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:04.502 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.502 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:04.502 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.502 14:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:04.502 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.502 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:04.502 [2024-12-05 14:25:10.759552] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:04.502 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.502 14:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:04.502 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.502 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:04.502 [ 00:35:04.502 { 00:35:04.502 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:04.502 "subtype": "Discovery", 00:35:04.502 "listen_addresses": [], 00:35:04.502 "allow_any_host": true, 00:35:04.502 "hosts": [] 00:35:04.502 }, 00:35:04.502 { 00:35:04.502 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:04.502 "subtype": "NVMe", 00:35:04.502 "listen_addresses": [ 00:35:04.502 { 00:35:04.502 "trtype": "TCP", 00:35:04.502 "adrfam": "IPv4", 00:35:04.502 "traddr": "10.0.0.2", 00:35:04.502 "trsvcid": "4420" 00:35:04.502 } 00:35:04.502 ], 00:35:04.502 "allow_any_host": true, 00:35:04.502 "hosts": [], 00:35:04.502 "serial_number": 
"SPDK00000000000001", 00:35:04.502 "model_number": "SPDK bdev Controller", 00:35:04.502 "max_namespaces": 1, 00:35:04.502 "min_cntlid": 1, 00:35:04.502 "max_cntlid": 65519, 00:35:04.502 "namespaces": [ 00:35:04.502 { 00:35:04.502 "nsid": 1, 00:35:04.502 "bdev_name": "Nvme0n1", 00:35:04.502 "name": "Nvme0n1", 00:35:04.502 "nguid": "36344730526054870025384500000044", 00:35:04.502 "uuid": "36344730-5260-5487-0025-384500000044" 00:35:04.502 } 00:35:04.502 ] 00:35:04.502 } 00:35:04.502 ] 00:35:04.502 14:25:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.502 14:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:04.502 14:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:04.502 14:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:04.760 14:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:35:04.760 14:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:04.760 14:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:04.760 14:25:10 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:05.019 14:25:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:35:05.019 14:25:11 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:35:05.019 14:25:11 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:35:05.019 14:25:11 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:05.019 14:25:11 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.019 14:25:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:05.019 14:25:11 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.019 14:25:11 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:05.019 14:25:11 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:05.019 14:25:11 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:05.019 14:25:11 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:05.019 14:25:11 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:05.019 14:25:11 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:05.019 14:25:11 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:05.019 14:25:11 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:05.019 rmmod nvme_tcp 00:35:05.019 rmmod nvme_fabrics 00:35:05.019 rmmod nvme_keyring 00:35:05.277 14:25:11 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:05.277 14:25:11 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:05.277 14:25:11 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:05.277 14:25:11 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
3021532 ']' 00:35:05.277 14:25:11 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3021532 00:35:05.277 14:25:11 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3021532 ']' 00:35:05.277 14:25:11 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3021532 00:35:05.277 14:25:11 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:35:05.277 14:25:11 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:05.277 14:25:11 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3021532 00:35:05.277 14:25:11 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:05.277 14:25:11 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:05.277 14:25:11 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3021532' 00:35:05.277 killing process with pid 3021532 00:35:05.277 14:25:11 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3021532 00:35:05.277 14:25:11 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3021532 00:35:05.537 14:25:11 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:05.537 14:25:11 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:05.537 14:25:11 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:05.537 14:25:11 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:05.537 14:25:11 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:35:05.537 14:25:11 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:05.537 14:25:11 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:35:05.537 14:25:11 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:05.537 14:25:11 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:05.537 14:25:11 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:05.537 14:25:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:05.537 14:25:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:07.446 14:25:13 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:07.446 00:35:07.446 real 0m13.272s 00:35:07.446 user 0m10.528s 00:35:07.446 sys 0m6.748s 00:35:07.446 14:25:13 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:07.446 14:25:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:07.446 ************************************ 00:35:07.446 END TEST nvmf_identify_passthru 00:35:07.446 ************************************ 00:35:07.706 14:25:13 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:07.706 14:25:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:07.706 14:25:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:07.706 14:25:13 -- common/autotest_common.sh@10 -- # set +x 00:35:07.706 ************************************ 00:35:07.706 START TEST nvmf_dif 00:35:07.706 ************************************ 00:35:07.706 14:25:13 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:07.706 * Looking for test storage... 
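The identify_passthru test that ends above attaches the local PCIe drive (0000:65:00.0) as bdev Nvme0n1, exports it through subsystem nqn.2016-06.io.spdk:cnode1 with --passthru-identify-ctrlr enabled, and asserts that identify data read over the fabric matches what the controller reports directly over PCIe. Its core check, reduced to a sketch (paths and values from the run above):

  IDENTIFY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
  direct=$("$IDENTIFY" -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 | grep 'Serial Number:' | awk '{print $3}')
  fabric=$("$IDENTIFY" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:' | awk '{print $3}')
  [ "$direct" = "$fabric" ]   # both sides reported S64GNE0R605487 in this run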
00:35:07.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:07.706 14:25:13 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:07.706 14:25:13 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:35:07.706 14:25:13 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:07.706 14:25:13 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:07.706 14:25:13 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:07.706 14:25:13 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:07.706 14:25:13 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:07.706 14:25:13 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:07.706 14:25:13 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:07.706 14:25:13 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:07.706 14:25:13 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:07.706 14:25:13 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:07.706 14:25:13 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:07.706 14:25:13 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:07.706 14:25:13 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:07.706 14:25:13 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:07.706 14:25:13 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:07.706 14:25:13 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:07.706 14:25:13 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:07.706 14:25:13 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:07.706 14:25:13 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:07.706 14:25:13 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:07.706 14:25:13 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:07.706 14:25:13 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:07.706 14:25:13 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:07.706 14:25:13 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:07.706 14:25:13 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:07.706 14:25:13 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:07.706 14:25:14 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:07.706 14:25:14 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:07.706 14:25:14 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:07.706 14:25:14 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:07.706 14:25:14 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:07.706 14:25:14 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:07.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.967 --rc genhtml_branch_coverage=1 00:35:07.967 --rc genhtml_function_coverage=1 00:35:07.967 --rc genhtml_legend=1 00:35:07.967 --rc geninfo_all_blocks=1 00:35:07.967 --rc geninfo_unexecuted_blocks=1 00:35:07.967 00:35:07.967 ' 00:35:07.967 14:25:14 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:07.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.967 --rc genhtml_branch_coverage=1 00:35:07.967 --rc genhtml_function_coverage=1 00:35:07.967 --rc genhtml_legend=1 00:35:07.967 --rc geninfo_all_blocks=1 00:35:07.967 --rc geninfo_unexecuted_blocks=1 00:35:07.967 00:35:07.967 ' 00:35:07.967 14:25:14 nvmf_dif -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:35:07.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.967 --rc genhtml_branch_coverage=1 00:35:07.967 --rc genhtml_function_coverage=1 00:35:07.967 --rc genhtml_legend=1 00:35:07.967 --rc geninfo_all_blocks=1 00:35:07.967 --rc geninfo_unexecuted_blocks=1 00:35:07.967 00:35:07.967 ' 00:35:07.967 14:25:14 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:07.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.967 --rc genhtml_branch_coverage=1 00:35:07.967 --rc genhtml_function_coverage=1 00:35:07.967 --rc genhtml_legend=1 00:35:07.967 --rc geninfo_all_blocks=1 00:35:07.967 --rc geninfo_unexecuted_blocks=1 00:35:07.967 00:35:07.967 ' 00:35:07.967 14:25:14 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:07.967 14:25:14 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:07.967 14:25:14 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:07.967 14:25:14 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:07.967 14:25:14 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:07.967 14:25:14 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.967 14:25:14 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.967 14:25:14 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.967 14:25:14 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:07.967 14:25:14 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:07.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:07.967 14:25:14 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:07.967 14:25:14 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:07.967 14:25:14 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:07.967 14:25:14 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:07.967 14:25:14 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:07.967 14:25:14 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:07.968 14:25:14 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:07.968 14:25:14 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:07.968 14:25:14 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:07.968 14:25:14 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:07.968 14:25:14 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:07.968 14:25:14 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:07.968 14:25:14 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:07.968 14:25:14 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:07.968 14:25:14 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:07.968 14:25:14 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:07.968 14:25:14 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:35:07.968 14:25:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:16.104 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:16.104 
14:25:20 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:16.104 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:16.104 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:16.104 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:16.104 14:25:20 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:16.105 14:25:20 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:16.105 14:25:20 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:16.105 14:25:20 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:16.105 14:25:20 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:16.105 14:25:20 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:16.105 14:25:20 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:16.105 14:25:20 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:16.105 14:25:20 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:16.105 14:25:20 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:16.105 14:25:20 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:16.105 14:25:20 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:16.105 14:25:20 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:16.105 14:25:21 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:16.105 14:25:21 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:16.105 14:25:21 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:16.105 14:25:21 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:16.105 14:25:21 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:16.105 14:25:21 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:16.105 14:25:21 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:16.105 14:25:21 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:16.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:16.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:35:16.105 00:35:16.105 --- 10.0.0.2 ping statistics --- 00:35:16.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.105 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:35:16.105 14:25:21 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:16.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:16.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:35:16.105 00:35:16.105 --- 10.0.0.1 ping statistics --- 00:35:16.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:16.105 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:35:16.105 14:25:21 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:16.105 14:25:21 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:35:16.105 14:25:21 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:16.105 14:25:21 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:18.719 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:18.719 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:18.719 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:18.719 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:18.719 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:18.719 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:18.719 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:18.719 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:18.719 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:18.719 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:35:18.719 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:18.719 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:18.719 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:18.719 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:18.719 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:18.719 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:18.719 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:18.719 14:25:24 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:18.719 14:25:24 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:18.719 14:25:24 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:18.719 14:25:24 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:18.719 14:25:24 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:18.719 14:25:24 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:18.719 14:25:24 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:18.719 14:25:24 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:18.719 14:25:24 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:18.719 14:25:24 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:18.719 14:25:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:18.719 14:25:24 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3027522 00:35:18.719 14:25:24 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3027522 00:35:18.719 14:25:24 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:18.719 14:25:24 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3027522 ']' 00:35:18.719 14:25:24 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:18.719 14:25:24 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:18.719 14:25:24 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:35:18.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:18.719 14:25:24 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:18.719 14:25:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:18.719 [2024-12-05 14:25:24.729471] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:35:18.719 [2024-12-05 14:25:24.729533] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:18.719 [2024-12-05 14:25:24.828666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.719 [2024-12-05 14:25:24.865928] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:18.719 [2024-12-05 14:25:24.865960] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:18.719 [2024-12-05 14:25:24.865968] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:18.719 [2024-12-05 14:25:24.865975] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:18.719 [2024-12-05 14:25:24.865981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:18.719 [2024-12-05 14:25:24.866554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:19.286 14:25:25 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:19.286 14:25:25 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:35:19.286 14:25:25 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:19.286 14:25:25 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:19.286 14:25:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:19.286 14:25:25 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:19.286 14:25:25 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:19.286 14:25:25 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:19.286 14:25:25 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.286 14:25:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:19.286 [2024-12-05 14:25:25.568494] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:19.286 14:25:25 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.286 14:25:25 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:19.286 14:25:25 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:19.286 14:25:25 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:19.286 14:25:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:19.545 ************************************ 00:35:19.546 START TEST fio_dif_1_default 00:35:19.546 ************************************ 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:19.546 bdev_null0 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:19.546 [2024-12-05 14:25:25.656850] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:19.546 { 00:35:19.546 "params": { 00:35:19.546 "name": "Nvme$subsystem", 00:35:19.546 "trtype": "$TEST_TRANSPORT", 00:35:19.546 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:19.546 "adrfam": "ipv4", 00:35:19.546 "trsvcid": "$NVMF_PORT", 00:35:19.546 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:19.546 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:19.546 "hdgst": ${hdgst:-false}, 00:35:19.546 
"ddgst": ${ddgst:-false} 00:35:19.546 }, 00:35:19.546 "method": "bdev_nvme_attach_controller" 00:35:19.546 } 00:35:19.546 EOF 00:35:19.546 )") 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:19.546 "params": { 00:35:19.546 "name": "Nvme0", 00:35:19.546 "trtype": "tcp", 00:35:19.546 "traddr": "10.0.0.2", 00:35:19.546 "adrfam": "ipv4", 00:35:19.546 "trsvcid": "4420", 00:35:19.546 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:19.546 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:19.546 "hdgst": false, 00:35:19.546 "ddgst": false 00:35:19.546 }, 00:35:19.546 "method": "bdev_nvme_attach_controller" 00:35:19.546 }' 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:19.546 14:25:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:20.114 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:20.114 fio-3.35 00:35:20.114 Starting 1 thread 00:35:32.345 00:35:32.345 filename0: (groupid=0, jobs=1): err= 0: pid=3028056: Thu Dec 5 14:25:36 2024 00:35:32.345 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10020msec) 00:35:32.345 slat (nsec): min=5528, max=35435, avg=6335.36, stdev=1649.67 00:35:32.345 clat (usec): min=40866, max=43195, avg=41045.01, stdev=260.23 00:35:32.345 lat (usec): min=40872, max=43230, avg=41051.34, stdev=261.03 00:35:32.345 clat percentiles (usec): 00:35:32.345 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:32.345 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:32.345 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:35:32.345 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:35:32.345 | 99.99th=[43254] 00:35:32.345 bw ( KiB/s): min= 352, max= 416, per=99.58%, avg=388.80, stdev=15.66, samples=20 00:35:32.345 iops : min= 88, max= 104, avg=97.20, stdev= 3.91, samples=20 00:35:32.345 lat (msec) : 50=100.00% 00:35:32.345 cpu : usr=93.58%, sys=6.20%, ctx=11, majf=0, minf=228 00:35:32.345 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.345 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.345 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.345 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:32.345 00:35:32.345 Run status group 0 (all jobs): 
00:35:32.345 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10020-10020msec 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.345 00:35:32.345 real 0m11.187s 00:35:32.345 user 0m26.829s 00:35:32.345 sys 0m0.986s 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:32.345 ************************************ 00:35:32.345 END TEST fio_dif_1_default 00:35:32.345 ************************************ 00:35:32.345 14:25:36 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:32.345 14:25:36 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:32.345 14:25:36 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:32.345 14:25:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:32.345 ************************************ 00:35:32.345 START TEST fio_dif_1_multi_subsystems 00:35:32.345 ************************************ 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:32.345 bdev_null0 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:32.345 [2024-12-05 14:25:36.922421] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:32.345 bdev_null1 00:35:32.345 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:32.346 { 00:35:32.346 "params": { 00:35:32.346 "name": "Nvme$subsystem", 00:35:32.346 "trtype": "$TEST_TRANSPORT", 00:35:32.346 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:32.346 "adrfam": "ipv4", 00:35:32.346 "trsvcid": "$NVMF_PORT", 00:35:32.346 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:32.346 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:32.346 "hdgst": ${hdgst:-false}, 00:35:32.346 "ddgst": ${ddgst:-false} 00:35:32.346 }, 00:35:32.346 "method": "bdev_nvme_attach_controller" 00:35:32.346 } 00:35:32.346 EOF 00:35:32.346 )") 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( 
file = 1 )) 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:32.346 { 00:35:32.346 "params": { 00:35:32.346 "name": "Nvme$subsystem", 00:35:32.346 "trtype": "$TEST_TRANSPORT", 00:35:32.346 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:32.346 "adrfam": "ipv4", 00:35:32.346 "trsvcid": "$NVMF_PORT", 00:35:32.346 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:32.346 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:32.346 "hdgst": ${hdgst:-false}, 00:35:32.346 "ddgst": ${ddgst:-false} 00:35:32.346 }, 00:35:32.346 "method": "bdev_nvme_attach_controller" 00:35:32.346 } 00:35:32.346 EOF 00:35:32.346 )") 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:35:32.346 14:25:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:32.346 "params": { 00:35:32.346 "name": "Nvme0", 00:35:32.346 "trtype": "tcp", 00:35:32.346 "traddr": "10.0.0.2", 00:35:32.346 "adrfam": "ipv4", 00:35:32.346 "trsvcid": "4420", 00:35:32.346 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:32.346 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:32.346 "hdgst": false, 00:35:32.346 "ddgst": false 00:35:32.346 }, 00:35:32.346 "method": "bdev_nvme_attach_controller" 00:35:32.346 },{ 00:35:32.346 "params": { 00:35:32.346 "name": "Nvme1", 00:35:32.346 "trtype": "tcp", 00:35:32.346 "traddr": "10.0.0.2", 00:35:32.346 "adrfam": "ipv4", 00:35:32.346 "trsvcid": "4420", 00:35:32.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:32.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:32.346 "hdgst": false, 00:35:32.346 "ddgst": false 00:35:32.346 }, 00:35:32.346 "method": "bdev_nvme_attach_controller" 00:35:32.346 }' 00:35:32.346 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:32.346 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:32.346 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:32.346 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:32.346 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:32.346 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:32.346 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:32.346 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:32.346 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:32.346 14:25:37 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:32.346 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:32.346 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:32.346 fio-3.35 00:35:32.346 Starting 2 threads 00:35:42.353 00:35:42.353 filename0: (groupid=0, jobs=1): err= 0: pid=3030456: Thu Dec 5 14:25:48 2024 00:35:42.353 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10009msec) 00:35:42.353 slat (nsec): min=5543, max=33497, avg=7076.87, stdev=2834.42 00:35:42.353 clat (usec): min=40836, max=42474, avg=40997.92, stdev=145.03 00:35:42.353 lat (usec): min=40841, max=42508, avg=41004.99, stdev=146.82 00:35:42.353 clat percentiles (usec): 00:35:42.353 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:42.353 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:42.353 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:42.353 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:35:42.353 | 99.99th=[42730] 00:35:42.353 bw ( KiB/s): min= 384, max= 416, per=49.74%, avg=388.80, stdev=11.72, samples=20 00:35:42.353 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:35:42.353 lat (msec) : 50=100.00% 00:35:42.353 cpu : usr=95.56%, sys=4.21%, ctx=20, majf=0, minf=161 00:35:42.353 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:42.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.353 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.353 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.353 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:42.353 filename1: (groupid=0, jobs=1): err= 0: pid=3030457: Thu Dec 5 14:25:48 2024 00:35:42.353 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10010msec) 00:35:42.353 slat (nsec): min=5544, max=40810, avg=6838.16, stdev=2796.08 00:35:42.353 clat (usec): min=40807, max=42569, avg=41003.84, stdev=159.95 00:35:42.353 lat (usec): min=40815, max=42602, avg=41010.68, stdev=161.42 00:35:42.353 clat percentiles (usec): 00:35:42.353 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:42.353 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:42.353 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:42.353 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:35:42.353 | 99.99th=[42730] 00:35:42.353 bw ( KiB/s): min= 384, max= 416, per=49.74%, avg=388.80, stdev=11.72, samples=20 00:35:42.353 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:35:42.353 lat (msec) : 50=100.00% 00:35:42.353 cpu : usr=95.25%, sys=4.51%, ctx=14, majf=0, minf=114 00:35:42.353 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:42.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.353 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.353 issued rwts: total=976,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:35:42.353 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:42.353 00:35:42.353 Run status group 0 (all jobs): 00:35:42.353 READ: bw=780KiB/s (799kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10009-10010msec 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.353 00:35:42.353 real 0m11.528s 00:35:42.353 user 0m32.315s 00:35:42.353 sys 0m1.276s 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:42.353 14:25:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:42.353 ************************************ 00:35:42.353 END TEST fio_dif_1_multi_subsystems 00:35:42.353 ************************************ 00:35:42.353 14:25:48 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:42.353 14:25:48 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:42.353 14:25:48 nvmf_dif -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:35:42.353 14:25:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:42.353 ************************************ 00:35:42.353 START TEST fio_dif_rand_params 00:35:42.353 ************************************ 00:35:42.353 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:35:42.353 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:42.353 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:42.353 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:42.353 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:42.353 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:42.353 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:42.353 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:42.353 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:42.353 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:42.353 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.354 bdev_null0 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.354 [2024-12-05 14:25:48.535770] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- 
# fio /dev/fd/62 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:42.354 { 00:35:42.354 "params": { 00:35:42.354 "name": "Nvme$subsystem", 00:35:42.354 "trtype": "$TEST_TRANSPORT", 00:35:42.354 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.354 "adrfam": "ipv4", 00:35:42.354 "trsvcid": "$NVMF_PORT", 00:35:42.354 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.354 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.354 "hdgst": ${hdgst:-false}, 00:35:42.354 "ddgst": ${ddgst:-false} 00:35:42.354 }, 00:35:42.354 "method": "bdev_nvme_attach_controller" 00:35:42.354 } 00:35:42.354 EOF 00:35:42.354 )") 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
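[annotation] Same /dev/fd/62 hand-off as in the earlier tests; the resolved config follows. Worth collecting in one place: the target side this rand_params pass talks to was assembled through the rpc_cmd calls logged above, i.e. a DIF-type-3 null bdev (64 MiB, 512-byte blocks plus 16 bytes of per-block metadata for protection information) behind a TCP transport created with --dif-insert-or-strip. A sketch of the same sequence against scripts/rpc.py, which is what rpc_cmd wraps in the SPDK test framework ($SPDK_DIR again stands in for the workspace checkout):

    # The rpc_cmd calls scattered through this test, gathered in order.
    RPC=$SPDK_DIR/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420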
00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:42.354 "params": { 00:35:42.354 "name": "Nvme0", 00:35:42.354 "trtype": "tcp", 00:35:42.354 "traddr": "10.0.0.2", 00:35:42.354 "adrfam": "ipv4", 00:35:42.354 "trsvcid": "4420", 00:35:42.354 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:42.354 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:42.354 "hdgst": false, 00:35:42.354 "ddgst": false 00:35:42.354 }, 00:35:42.354 "method": "bdev_nvme_attach_controller" 00:35:42.354 }' 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:42.354 14:25:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.958 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:42.958 ... 
00:35:42.958 fio-3.35 00:35:42.958 Starting 3 threads 00:35:48.246 00:35:48.246 filename0: (groupid=0, jobs=1): err= 0: pid=3032650: Thu Dec 5 14:25:54 2024 00:35:48.246 read: IOPS=350, BW=43.8MiB/s (46.0MB/s)(220MiB/5028msec) 00:35:48.246 slat (nsec): min=5575, max=66663, avg=8356.44, stdev=2234.84 00:35:48.246 clat (usec): min=3953, max=90546, avg=8544.00, stdev=7312.06 00:35:48.246 lat (usec): min=3959, max=90552, avg=8552.36, stdev=7312.13 00:35:48.246 clat percentiles (usec): 00:35:48.246 | 1.00th=[ 4621], 5.00th=[ 4883], 10.00th=[ 5276], 20.00th=[ 6325], 00:35:48.246 | 30.00th=[ 6521], 40.00th=[ 6783], 50.00th=[ 7308], 60.00th=[ 7898], 00:35:48.246 | 70.00th=[ 8356], 80.00th=[ 8848], 90.00th=[10159], 95.00th=[11076], 00:35:48.246 | 99.00th=[46924], 99.50th=[47973], 99.90th=[89654], 99.95th=[90702], 00:35:48.246 | 99.99th=[90702] 00:35:48.246 bw ( KiB/s): min=36096, max=52480, per=48.10%, avg=45056.00, stdev=6916.74, samples=10 00:35:48.246 iops : min= 282, max= 410, avg=352.00, stdev=54.04, samples=10 00:35:48.246 lat (msec) : 4=0.06%, 10=88.71%, 20=8.79%, 50=2.10%, 100=0.34% 00:35:48.246 cpu : usr=94.33%, sys=5.41%, ctx=7, majf=0, minf=184 00:35:48.246 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:48.246 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.246 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.246 issued rwts: total=1763,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.246 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:48.246 filename0: (groupid=0, jobs=1): err= 0: pid=3032651: Thu Dec 5 14:25:54 2024 00:35:48.246 read: IOPS=136, BW=17.1MiB/s (17.9MB/s)(86.0MiB/5040msec) 00:35:48.246 slat (nsec): min=5713, max=35330, avg=8433.45, stdev=1524.48 00:35:48.246 clat (msec): min=4, max=131, avg=21.96, stdev=24.47 00:35:48.246 lat (msec): min=4, max=131, avg=21.97, stdev=24.47 00:35:48.246 clat percentiles (msec): 00:35:48.246 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 7], 00:35:48.246 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:35:48.246 | 70.00th=[ 11], 80.00th=[ 49], 90.00th=[ 51], 95.00th=[ 89], 00:35:48.246 | 99.00th=[ 91], 99.50th=[ 92], 99.90th=[ 132], 99.95th=[ 132], 00:35:48.246 | 99.99th=[ 132] 00:35:48.246 bw ( KiB/s): min=11008, max=24320, per=18.72%, avg=17536.00, stdev=5131.72, samples=10 00:35:48.246 iops : min= 86, max= 190, avg=137.00, stdev=40.09, samples=10 00:35:48.246 lat (msec) : 10=65.70%, 20=6.10%, 50=18.31%, 100=9.59%, 250=0.29% 00:35:48.247 cpu : usr=96.17%, sys=3.57%, ctx=9, majf=0, minf=85 00:35:48.247 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:48.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.247 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.247 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:48.247 filename0: (groupid=0, jobs=1): err= 0: pid=3032652: Thu Dec 5 14:25:54 2024 00:35:48.247 read: IOPS=247, BW=30.9MiB/s (32.4MB/s)(155MiB/5007msec) 00:35:48.247 slat (nsec): min=5544, max=31800, avg=8166.05, stdev=1561.40 00:35:48.247 clat (usec): min=3649, max=89223, avg=12131.89, stdev=15811.45 00:35:48.247 lat (usec): min=3658, max=89231, avg=12140.06, stdev=15811.55 00:35:48.247 clat percentiles (usec): 00:35:48.247 | 1.00th=[ 4015], 5.00th=[ 4359], 10.00th=[ 4686], 20.00th=[ 5145], 00:35:48.247 | 30.00th=[ 5407], 
40.00th=[ 5735], 50.00th=[ 6063], 60.00th=[ 6587], 00:35:48.247 | 70.00th=[ 7111], 80.00th=[ 7832], 90.00th=[46924], 95.00th=[47973], 00:35:48.247 | 99.00th=[50070], 99.50th=[87557], 99.90th=[88605], 99.95th=[89654], 00:35:48.247 | 99.99th=[89654] 00:35:48.247 bw ( KiB/s): min=16384, max=51200, per=33.73%, avg=31590.40, stdev=12784.18, samples=10 00:35:48.247 iops : min= 128, max= 400, avg=246.80, stdev=99.88, samples=10 00:35:48.247 lat (msec) : 4=0.97%, 10=85.21%, 50=12.69%, 100=1.13% 00:35:48.247 cpu : usr=95.63%, sys=4.14%, ctx=9, majf=0, minf=83 00:35:48.247 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:48.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.247 issued rwts: total=1237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.247 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:48.247 00:35:48.247 Run status group 0 (all jobs): 00:35:48.247 READ: bw=91.5MiB/s (95.9MB/s), 17.1MiB/s-43.8MiB/s (17.9MB/s-46.0MB/s), io=461MiB (483MB), run=5007-5040msec 00:35:48.507 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:48.507 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:48.507 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:48.507 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:48.507 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:48.507 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:48.507 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.507 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.507 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.507 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:48.507 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.507 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.507 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.507 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:48.507 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:48.507 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:48.507 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:48.507 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:48.507 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:48.507 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:48.507 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:48.507 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:48.507 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:48.507 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 
--md-size 16 --dif-type 2 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.508 bdev_null0 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.508 [2024-12-05 14:25:54.747878] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.508 bdev_null1 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
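Each create_subsystem pass in this trace is the same four RPCs: create a 64 MiB null bdev with 512-byte blocks carrying 16 bytes of metadata and DIF type 2 protection, create the NVMe-oF subsystem, attach the bdev as a namespace, and add the TCP listener (the listener RPC for subsystem 1 follows right below). Condensed into a sketch, with rpc standing in for the harness's rpc_cmd, a hypothetical wrapper around scripts/rpc.py; the flags and addresses are the ones this run uses:

# One create_subsystem pass (sub = 0, 1, 2 in this test).
rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }
sub=0
rpc bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
    --serial-number "53313233-$sub" --allow-any-host
rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
    -t tcp -a 10.0.0.2 -s 4420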
00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.508 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.768 bdev_null2 00:35:48.768 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.768 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:48.768 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.768 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.768 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.768 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:48.768 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.768 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.768 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.768 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:48.768 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.768 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:48.768 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:48.769 { 00:35:48.769 "params": { 00:35:48.769 "name": "Nvme$subsystem", 00:35:48.769 "trtype": "$TEST_TRANSPORT", 00:35:48.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:48.769 "adrfam": "ipv4", 00:35:48.769 "trsvcid": "$NVMF_PORT", 00:35:48.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:48.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:48.769 "hdgst": ${hdgst:-false}, 00:35:48.769 "ddgst": ${ddgst:-false} 00:35:48.769 }, 00:35:48.769 "method": "bdev_nvme_attach_controller" 00:35:48.769 } 00:35:48.769 EOF 00:35:48.769 )") 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:48.769 { 00:35:48.769 "params": { 00:35:48.769 "name": "Nvme$subsystem", 00:35:48.769 "trtype": "$TEST_TRANSPORT", 00:35:48.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:48.769 "adrfam": "ipv4", 00:35:48.769 "trsvcid": "$NVMF_PORT", 00:35:48.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:48.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:48.769 "hdgst": ${hdgst:-false}, 00:35:48.769 "ddgst": ${ddgst:-false} 00:35:48.769 }, 00:35:48.769 "method": "bdev_nvme_attach_controller" 00:35:48.769 } 00:35:48.769 EOF 00:35:48.769 )") 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 
00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:48.769 { 00:35:48.769 "params": { 00:35:48.769 "name": "Nvme$subsystem", 00:35:48.769 "trtype": "$TEST_TRANSPORT", 00:35:48.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:48.769 "adrfam": "ipv4", 00:35:48.769 "trsvcid": "$NVMF_PORT", 00:35:48.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:48.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:48.769 "hdgst": ${hdgst:-false}, 00:35:48.769 "ddgst": ${ddgst:-false} 00:35:48.769 }, 00:35:48.769 "method": "bdev_nvme_attach_controller" 00:35:48.769 } 00:35:48.769 EOF 00:35:48.769 )") 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:48.769 "params": { 00:35:48.769 "name": "Nvme0", 00:35:48.769 "trtype": "tcp", 00:35:48.769 "traddr": "10.0.0.2", 00:35:48.769 "adrfam": "ipv4", 00:35:48.769 "trsvcid": "4420", 00:35:48.769 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:48.769 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:48.769 "hdgst": false, 00:35:48.769 "ddgst": false 00:35:48.769 }, 00:35:48.769 "method": "bdev_nvme_attach_controller" 00:35:48.769 },{ 00:35:48.769 "params": { 00:35:48.769 "name": "Nvme1", 00:35:48.769 "trtype": "tcp", 00:35:48.769 "traddr": "10.0.0.2", 00:35:48.769 "adrfam": "ipv4", 00:35:48.769 "trsvcid": "4420", 00:35:48.769 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:48.769 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:48.769 "hdgst": false, 00:35:48.769 "ddgst": false 00:35:48.769 }, 00:35:48.769 "method": "bdev_nvme_attach_controller" 00:35:48.769 },{ 00:35:48.769 "params": { 00:35:48.769 "name": "Nvme2", 00:35:48.769 "trtype": "tcp", 00:35:48.769 "traddr": "10.0.0.2", 00:35:48.769 "adrfam": "ipv4", 00:35:48.769 "trsvcid": "4420", 00:35:48.769 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:48.769 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:48.769 "hdgst": false, 00:35:48.769 "ddgst": false 00:35:48.769 }, 00:35:48.769 "method": "bdev_nvme_attach_controller" 00:35:48.769 }' 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:48.769 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:48.770 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:48.770 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:48.770 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:48.770 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:48.770 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:48.770 14:25:54 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:48.770 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:48.770 14:25:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:49.031 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:49.031 ... 00:35:49.031 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:49.031 ... 00:35:49.031 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:49.031 ... 00:35:49.031 fio-3.35 00:35:49.031 Starting 24 threads 00:36:01.295 00:36:01.295 filename0: (groupid=0, jobs=1): err= 0: pid=3034161: Thu Dec 5 14:26:06 2024 00:36:01.295 read: IOPS=669, BW=2676KiB/s (2740kB/s)(26.2MiB/10011msec) 00:36:01.295 slat (usec): min=5, max=103, avg=18.45, stdev=14.85 00:36:01.295 clat (usec): min=9611, max=34147, avg=23764.65, stdev=1742.10 00:36:01.295 lat (usec): min=9647, max=34154, avg=23783.10, stdev=1740.88 00:36:01.295 clat percentiles (usec): 00:36:01.295 | 1.00th=[14353], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:36:01.295 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:01.295 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24249], 95.00th=[24511], 00:36:01.295 | 99.00th=[31327], 99.50th=[31327], 99.90th=[31851], 99.95th=[34341], 00:36:01.295 | 99.99th=[34341] 00:36:01.295 bw ( KiB/s): min= 2560, max= 2816, per=4.19%, avg=2678.74, stdev=52.61, samples=19 00:36:01.295 iops : min= 640, max= 704, avg=669.68, stdev=13.15, samples=19 00:36:01.295 lat (msec) : 10=0.10%, 20=2.54%, 50=97.36% 00:36:01.295 cpu : usr=98.91%, sys=0.81%, ctx=33, majf=0, minf=9 00:36:01.295 IO depths : 1=6.0%, 2=12.1%, 4=24.6%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:01.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.295 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.295 issued rwts: total=6698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.295 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.295 filename0: (groupid=0, jobs=1): err= 0: pid=3034162: Thu Dec 5 14:26:06 2024 00:36:01.295 read: IOPS=687, BW=2748KiB/s (2814kB/s)(26.9MiB/10011msec) 00:36:01.295 slat (nsec): min=5706, max=87348, avg=14548.37, stdev=11783.19 00:36:01.295 clat (usec): min=8658, max=45050, avg=23186.35, stdev=3698.81 00:36:01.295 lat (usec): min=8677, max=45076, avg=23200.90, stdev=3700.77 00:36:01.295 clat percentiles (usec): 00:36:01.295 | 1.00th=[11076], 5.00th=[16581], 10.00th=[17957], 20.00th=[22414], 00:36:01.295 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:01.295 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24773], 95.00th=[27657], 00:36:01.295 | 99.00th=[34341], 99.50th=[36963], 99.90th=[43779], 99.95th=[44827], 00:36:01.295 | 99.99th=[44827] 00:36:01.295 bw ( KiB/s): min= 2576, max= 3024, per=4.30%, avg=2743.58, stdev=113.68, samples=19 00:36:01.295 iops : min= 644, max= 756, avg=685.89, stdev=28.42, samples=19 00:36:01.295 lat (msec) : 10=0.39%, 20=12.36%, 50=87.25% 00:36:01.295 cpu : usr=98.45%, sys=1.17%, ctx=168, majf=0, minf=9 00:36:01.295 IO depths : 1=2.9%, 2=5.8%, 4=13.8%, 8=66.8%, 16=10.6%, 32=0.0%, >=64=0.0% 00:36:01.295 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.295 complete : 0=0.0%, 4=91.2%, 8=4.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.295 issued rwts: total=6878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.295 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.295 filename0: (groupid=0, jobs=1): err= 0: pid=3034163: Thu Dec 5 14:26:06 2024 00:36:01.295 read: IOPS=665, BW=2662KiB/s (2726kB/s)(26.0MiB/10001msec) 00:36:01.295 slat (nsec): min=5726, max=72492, avg=17275.06, stdev=11017.26 00:36:01.295 clat (usec): min=11548, max=35531, avg=23872.34, stdev=956.09 00:36:01.295 lat (usec): min=11555, max=35554, avg=23889.62, stdev=956.07 00:36:01.295 clat percentiles (usec): 00:36:01.295 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:36:01.295 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:01.295 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:36:01.295 | 99.00th=[24773], 99.50th=[25035], 99.90th=[35390], 99.95th=[35390], 00:36:01.295 | 99.99th=[35390] 00:36:01.295 bw ( KiB/s): min= 2560, max= 2688, per=4.16%, avg=2654.26, stdev=57.29, samples=19 00:36:01.295 iops : min= 640, max= 672, avg=663.53, stdev=14.33, samples=19 00:36:01.295 lat (msec) : 20=0.48%, 50=99.52% 00:36:01.295 cpu : usr=98.76%, sys=0.94%, ctx=62, majf=0, minf=9 00:36:01.295 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:01.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.295 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.295 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.295 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.295 filename0: (groupid=0, jobs=1): err= 0: pid=3034164: Thu Dec 5 14:26:06 2024 00:36:01.295 read: IOPS=666, BW=2664KiB/s (2728kB/s)(26.0MiB/10004msec) 00:36:01.295 slat (nsec): min=5842, max=91035, avg=27804.19, stdev=14434.87 00:36:01.295 clat (usec): min=3552, max=46476, avg=23767.47, stdev=1647.90 00:36:01.295 lat (usec): min=3558, max=46493, avg=23795.27, stdev=1648.13 00:36:01.295 clat percentiles (usec): 00:36:01.295 | 1.00th=[23200], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:36:01.295 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:01.295 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:36:01.295 | 99.00th=[24773], 99.50th=[25035], 99.90th=[46400], 99.95th=[46400], 00:36:01.295 | 99.99th=[46400] 00:36:01.295 bw ( KiB/s): min= 2436, max= 2693, per=4.16%, avg=2654.47, stdev=71.24, samples=19 00:36:01.295 iops : min= 609, max= 673, avg=663.58, stdev=17.79, samples=19 00:36:01.295 lat (msec) : 4=0.11%, 10=0.24%, 20=0.48%, 50=99.17% 00:36:01.295 cpu : usr=98.48%, sys=1.09%, ctx=90, majf=0, minf=9 00:36:01.295 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:01.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.295 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.295 issued rwts: total=6663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.296 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.296 filename0: (groupid=0, jobs=1): err= 0: pid=3034165: Thu Dec 5 14:26:06 2024 00:36:01.296 read: IOPS=668, BW=2673KiB/s (2737kB/s)(26.1MiB/10008msec) 00:36:01.296 slat (nsec): min=5773, max=94967, avg=24702.00, stdev=15061.73 00:36:01.296 clat (usec): min=7610, max=32695, 
avg=23729.47, stdev=1470.46 00:36:01.296 lat (usec): min=7651, max=32707, avg=23754.17, stdev=1469.57 00:36:01.296 clat percentiles (usec): 00:36:01.296 | 1.00th=[15401], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:36:01.296 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:01.296 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:36:01.296 | 99.00th=[24773], 99.50th=[25035], 99.90th=[32637], 99.95th=[32637], 00:36:01.296 | 99.99th=[32637] 00:36:01.296 bw ( KiB/s): min= 2560, max= 2944, per=4.19%, avg=2674.53, stdev=84.20, samples=19 00:36:01.296 iops : min= 640, max= 736, avg=668.63, stdev=21.05, samples=19 00:36:01.296 lat (msec) : 10=0.21%, 20=1.20%, 50=98.59% 00:36:01.296 cpu : usr=98.47%, sys=0.99%, ctx=146, majf=0, minf=9 00:36:01.296 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:01.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.296 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.296 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.296 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.296 filename0: (groupid=0, jobs=1): err= 0: pid=3034166: Thu Dec 5 14:26:06 2024 00:36:01.296 read: IOPS=665, BW=2662KiB/s (2726kB/s)(26.0MiB/10002msec) 00:36:01.296 slat (nsec): min=5721, max=74175, avg=18654.20, stdev=12505.72 00:36:01.296 clat (usec): min=11743, max=36760, avg=23882.48, stdev=997.99 00:36:01.296 lat (usec): min=11749, max=36776, avg=23901.14, stdev=997.51 00:36:01.296 clat percentiles (usec): 00:36:01.296 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:36:01.296 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:01.296 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:36:01.296 | 99.00th=[25035], 99.50th=[25035], 99.90th=[36963], 99.95th=[36963], 00:36:01.296 | 99.99th=[36963] 00:36:01.296 bw ( KiB/s): min= 2560, max= 2688, per=4.16%, avg=2654.00, stdev=57.73, samples=19 00:36:01.296 iops : min= 640, max= 672, avg=663.47, stdev=14.42, samples=19 00:36:01.296 lat (msec) : 20=0.48%, 50=99.52% 00:36:01.296 cpu : usr=98.77%, sys=0.85%, ctx=115, majf=0, minf=9 00:36:01.296 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:01.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.296 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.296 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.296 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.296 filename0: (groupid=0, jobs=1): err= 0: pid=3034167: Thu Dec 5 14:26:06 2024 00:36:01.296 read: IOPS=665, BW=2662KiB/s (2726kB/s)(26.0MiB/10002msec) 00:36:01.296 slat (nsec): min=5756, max=91835, avg=21178.86, stdev=17220.65 00:36:01.296 clat (usec): min=17438, max=27493, avg=23865.00, stdev=526.88 00:36:01.296 lat (usec): min=17447, max=27499, avg=23886.18, stdev=524.38 00:36:01.296 clat percentiles (usec): 00:36:01.296 | 1.00th=[23200], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:36:01.296 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:01.296 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:36:01.296 | 99.00th=[24773], 99.50th=[25035], 99.90th=[26084], 99.95th=[27395], 00:36:01.296 | 99.99th=[27395] 00:36:01.296 bw ( KiB/s): min= 2560, max= 2688, per=4.17%, avg=2660.42, stdev=53.31, samples=19 
00:36:01.296 iops : min= 640, max= 672, avg=665.05, stdev=13.31, samples=19 00:36:01.296 lat (msec) : 20=0.48%, 50=99.52% 00:36:01.296 cpu : usr=98.93%, sys=0.79%, ctx=23, majf=0, minf=9 00:36:01.296 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:01.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.296 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.296 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.296 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.296 filename0: (groupid=0, jobs=1): err= 0: pid=3034168: Thu Dec 5 14:26:06 2024 00:36:01.296 read: IOPS=669, BW=2676KiB/s (2740kB/s)(26.2MiB/10020msec) 00:36:01.296 slat (nsec): min=5711, max=74876, avg=10033.08, stdev=5857.23 00:36:01.296 clat (usec): min=9411, max=32123, avg=23827.85, stdev=1601.63 00:36:01.296 lat (usec): min=9428, max=32141, avg=23837.89, stdev=1600.51 00:36:01.296 clat percentiles (usec): 00:36:01.296 | 1.00th=[15139], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:36:01.296 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:36:01.296 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24511], 00:36:01.296 | 99.00th=[25035], 99.50th=[31327], 99.90th=[32113], 99.95th=[32113], 00:36:01.296 | 99.99th=[32113] 00:36:01.296 bw ( KiB/s): min= 2560, max= 2944, per=4.19%, avg=2675.20, stdev=82.01, samples=20 00:36:01.296 iops : min= 640, max= 736, avg=668.80, stdev=20.50, samples=20 00:36:01.296 lat (msec) : 10=0.24%, 20=1.55%, 50=98.21% 00:36:01.296 cpu : usr=98.33%, sys=1.10%, ctx=256, majf=0, minf=9 00:36:01.296 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:36:01.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.296 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.296 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.296 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.296 filename1: (groupid=0, jobs=1): err= 0: pid=3034169: Thu Dec 5 14:26:06 2024 00:36:01.296 read: IOPS=665, BW=2664KiB/s (2728kB/s)(26.0MiB/10004msec) 00:36:01.296 slat (usec): min=5, max=102, avg=28.70, stdev=17.11 00:36:01.296 clat (usec): min=4181, max=46447, avg=23737.02, stdev=1635.19 00:36:01.296 lat (usec): min=4187, max=46467, avg=23765.72, stdev=1636.01 00:36:01.296 clat percentiles (usec): 00:36:01.296 | 1.00th=[23200], 5.00th=[23462], 10.00th=[23462], 20.00th=[23462], 00:36:01.296 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:01.296 | 70.00th=[23987], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:36:01.296 | 99.00th=[24773], 99.50th=[25035], 99.90th=[46400], 99.95th=[46400], 00:36:01.296 | 99.99th=[46400] 00:36:01.296 bw ( KiB/s): min= 2436, max= 2693, per=4.16%, avg=2654.47, stdev=71.24, samples=19 00:36:01.296 iops : min= 609, max= 673, avg=663.58, stdev=17.79, samples=19 00:36:01.296 lat (msec) : 10=0.33%, 20=0.48%, 50=99.19% 00:36:01.296 cpu : usr=98.94%, sys=0.75%, ctx=77, majf=0, minf=9 00:36:01.296 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:01.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.296 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.296 issued rwts: total=6662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.297 latency : target=0, window=0, percentile=100.00%, depth=16 
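Every per-thread block in this group follows the same fixed fio layout (clat percentiles, bw, iops, IO depth distributions, issued rwts), so the log lends itself to mechanical post-processing. As a small, hypothetical example against a saved copy of this output (the file name is a placeholder):

# Pull the average IOPS out of every "iops :" summary line.
grep -E 'iops *: *min=' fio-dif-rand-params.log \
    | sed -E 's/.*avg=([0-9.]+).*/\1/'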
00:36:01.297 filename1: (groupid=0, jobs=1): err= 0: pid=3034170: Thu Dec 5 14:26:06 2024 00:36:01.297 read: IOPS=666, BW=2666KiB/s (2730kB/s)(26.0MiB/10003msec) 00:36:01.297 slat (nsec): min=5810, max=94526, avg=27911.47, stdev=15690.23 00:36:01.297 clat (usec): min=3109, max=46085, avg=23741.17, stdev=1695.20 00:36:01.297 lat (usec): min=3116, max=46106, avg=23769.08, stdev=1695.98 00:36:01.297 clat percentiles (usec): 00:36:01.297 | 1.00th=[23200], 5.00th=[23462], 10.00th=[23462], 20.00th=[23462], 00:36:01.297 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:01.297 | 70.00th=[23987], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:36:01.297 | 99.00th=[24773], 99.50th=[25035], 99.90th=[45876], 99.95th=[45876], 00:36:01.297 | 99.99th=[45876] 00:36:01.297 bw ( KiB/s): min= 2436, max= 2693, per=4.16%, avg=2654.47, stdev=71.24, samples=19 00:36:01.297 iops : min= 609, max= 673, avg=663.58, stdev=17.79, samples=19 00:36:01.297 lat (msec) : 4=0.15%, 10=0.24%, 20=0.48%, 50=99.13% 00:36:01.297 cpu : usr=98.64%, sys=0.94%, ctx=128, majf=0, minf=9 00:36:01.297 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:01.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.297 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.297 issued rwts: total=6666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.297 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.297 filename1: (groupid=0, jobs=1): err= 0: pid=3034171: Thu Dec 5 14:26:06 2024 00:36:01.297 read: IOPS=665, BW=2661KiB/s (2725kB/s)(26.0MiB/10004msec) 00:36:01.297 slat (nsec): min=5568, max=72622, avg=16271.84, stdev=12657.09 00:36:01.297 clat (usec): min=4955, max=53244, avg=23918.76, stdev=2228.44 00:36:01.297 lat (usec): min=4961, max=53264, avg=23935.03, stdev=2228.61 00:36:01.297 clat percentiles (usec): 00:36:01.297 | 1.00th=[16450], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:36:01.297 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:01.297 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24511], 00:36:01.297 | 99.00th=[29230], 99.50th=[33817], 99.90th=[53216], 99.95th=[53216], 00:36:01.297 | 99.99th=[53216] 00:36:01.297 bw ( KiB/s): min= 2432, max= 2698, per=4.15%, avg=2650.05, stdev=67.21, samples=19 00:36:01.297 iops : min= 608, max= 674, avg=662.47, stdev=16.77, samples=19 00:36:01.297 lat (msec) : 10=0.39%, 20=1.56%, 50=97.81%, 100=0.24% 00:36:01.297 cpu : usr=98.73%, sys=0.99%, ctx=19, majf=0, minf=9 00:36:01.297 IO depths : 1=3.4%, 2=6.7%, 4=13.9%, 8=64.3%, 16=11.7%, 32=0.0%, >=64=0.0% 00:36:01.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.297 complete : 0=0.0%, 4=91.8%, 8=4.9%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.297 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.297 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.297 filename1: (groupid=0, jobs=1): err= 0: pid=3034172: Thu Dec 5 14:26:06 2024 00:36:01.297 read: IOPS=669, BW=2679KiB/s (2743kB/s)(26.2MiB/10011msec) 00:36:01.297 slat (nsec): min=5709, max=60623, avg=8667.06, stdev=4798.81 00:36:01.297 clat (usec): min=8900, max=25162, avg=23816.15, stdev=1493.66 00:36:01.297 lat (usec): min=8932, max=25169, avg=23824.82, stdev=1491.69 00:36:01.297 clat percentiles (usec): 00:36:01.297 | 1.00th=[14353], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:36:01.297 | 30.00th=[23987], 40.00th=[23987], 
50.00th=[23987], 60.00th=[23987], 00:36:01.297 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:36:01.297 | 99.00th=[24773], 99.50th=[25035], 99.90th=[25035], 99.95th=[25035], 00:36:01.297 | 99.99th=[25035] 00:36:01.297 bw ( KiB/s): min= 2560, max= 2944, per=4.20%, avg=2681.26, stdev=79.52, samples=19 00:36:01.297 iops : min= 640, max= 736, avg=670.32, stdev=19.88, samples=19 00:36:01.297 lat (msec) : 10=0.37%, 20=1.06%, 50=98.57% 00:36:01.297 cpu : usr=98.55%, sys=1.06%, ctx=110, majf=0, minf=9 00:36:01.297 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:01.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.297 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.297 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.297 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.297 filename1: (groupid=0, jobs=1): err= 0: pid=3034173: Thu Dec 5 14:26:06 2024 00:36:01.297 read: IOPS=667, BW=2670KiB/s (2734kB/s)(26.2MiB/10060msec) 00:36:01.297 slat (nsec): min=5705, max=79487, avg=12389.96, stdev=9285.99 00:36:01.297 clat (usec): min=8856, max=61365, avg=23860.71, stdev=2594.94 00:36:01.297 lat (usec): min=8862, max=61373, avg=23873.10, stdev=2594.44 00:36:01.297 clat percentiles (usec): 00:36:01.297 | 1.00th=[14484], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:36:01.297 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:36:01.297 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:36:01.297 | 99.00th=[32113], 99.50th=[32900], 99.90th=[61080], 99.95th=[61604], 00:36:01.297 | 99.99th=[61604] 00:36:01.297 bw ( KiB/s): min= 2560, max= 2816, per=4.20%, avg=2680.00, stdev=61.15, samples=20 00:36:01.297 iops : min= 640, max= 704, avg=670.00, stdev=15.29, samples=20 00:36:01.297 lat (msec) : 10=0.18%, 20=3.32%, 50=96.29%, 100=0.21% 00:36:01.297 cpu : usr=96.99%, sys=2.00%, ctx=800, majf=0, minf=9 00:36:01.297 IO depths : 1=5.0%, 2=10.9%, 4=24.0%, 8=52.6%, 16=7.6%, 32=0.0%, >=64=0.0% 00:36:01.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.297 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.297 issued rwts: total=6714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.297 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.297 filename1: (groupid=0, jobs=1): err= 0: pid=3034174: Thu Dec 5 14:26:06 2024 00:36:01.297 read: IOPS=666, BW=2665KiB/s (2729kB/s)(26.1MiB/10014msec) 00:36:01.297 slat (nsec): min=5732, max=62124, avg=10799.37, stdev=6937.20 00:36:01.297 clat (usec): min=10104, max=41945, avg=23904.37, stdev=1419.89 00:36:01.297 lat (usec): min=10111, max=41952, avg=23915.17, stdev=1419.65 00:36:01.297 clat percentiles (usec): 00:36:01.297 | 1.00th=[19006], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:36:01.297 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:36:01.297 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:36:01.297 | 99.00th=[25035], 99.50th=[25035], 99.90th=[40109], 99.95th=[41157], 00:36:01.297 | 99.99th=[42206] 00:36:01.297 bw ( KiB/s): min= 2560, max= 2688, per=4.18%, avg=2667.79, stdev=47.95, samples=19 00:36:01.297 iops : min= 640, max= 672, avg=666.95, stdev=11.99, samples=19 00:36:01.297 lat (msec) : 20=1.17%, 50=98.83% 00:36:01.297 cpu : usr=98.62%, sys=0.96%, ctx=136, majf=0, minf=9 00:36:01.297 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 
16=7.0%, 32=0.0%, >=64=0.0% 00:36:01.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.297 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.297 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.297 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.298 filename1: (groupid=0, jobs=1): err= 0: pid=3034175: Thu Dec 5 14:26:06 2024 00:36:01.298 read: IOPS=665, BW=2663KiB/s (2727kB/s)(26.0MiB/10006msec) 00:36:01.298 slat (nsec): min=5815, max=83868, avg=23585.54, stdev=12241.45 00:36:01.298 clat (usec): min=14955, max=36651, avg=23830.36, stdev=854.83 00:36:01.298 lat (usec): min=14961, max=36666, avg=23853.95, stdev=854.59 00:36:01.298 clat percentiles (usec): 00:36:01.298 | 1.00th=[20317], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:36:01.298 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:01.298 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:36:01.298 | 99.00th=[25035], 99.50th=[26608], 99.90th=[36439], 99.95th=[36439], 00:36:01.298 | 99.99th=[36439] 00:36:01.298 bw ( KiB/s): min= 2560, max= 2688, per=4.17%, avg=2662.63, stdev=47.05, samples=19 00:36:01.298 iops : min= 640, max= 672, avg=665.58, stdev=11.73, samples=19 00:36:01.298 lat (msec) : 20=0.84%, 50=99.16% 00:36:01.298 cpu : usr=98.61%, sys=1.02%, ctx=92, majf=0, minf=9 00:36:01.298 IO depths : 1=5.8%, 2=11.8%, 4=24.3%, 8=51.3%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:01.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.298 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.298 issued rwts: total=6662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.298 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.298 filename1: (groupid=0, jobs=1): err= 0: pid=3034176: Thu Dec 5 14:26:06 2024 00:36:01.298 read: IOPS=668, BW=2673KiB/s (2737kB/s)(26.1MiB/10009msec) 00:36:01.298 slat (nsec): min=5832, max=83432, avg=22322.64, stdev=13562.89 00:36:01.298 clat (usec): min=10399, max=38708, avg=23773.18, stdev=1732.78 00:36:01.298 lat (usec): min=10407, max=38727, avg=23795.50, stdev=1733.51 00:36:01.298 clat percentiles (usec): 00:36:01.298 | 1.00th=[16057], 5.00th=[21890], 10.00th=[23462], 20.00th=[23725], 00:36:01.298 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:01.298 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:36:01.298 | 99.00th=[29492], 99.50th=[31327], 99.90th=[38536], 99.95th=[38536], 00:36:01.298 | 99.99th=[38536] 00:36:01.298 bw ( KiB/s): min= 2560, max= 2784, per=4.18%, avg=2667.47, stdev=60.41, samples=19 00:36:01.298 iops : min= 640, max= 696, avg=666.84, stdev=15.07, samples=19 00:36:01.298 lat (msec) : 20=2.98%, 50=97.02% 00:36:01.298 cpu : usr=98.51%, sys=1.04%, ctx=101, majf=0, minf=9 00:36:01.298 IO depths : 1=3.3%, 2=8.3%, 4=20.8%, 8=57.9%, 16=9.8%, 32=0.0%, >=64=0.0% 00:36:01.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.298 complete : 0=0.0%, 4=93.3%, 8=1.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.298 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.298 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.298 filename2: (groupid=0, jobs=1): err= 0: pid=3034177: Thu Dec 5 14:26:06 2024 00:36:01.298 read: IOPS=673, BW=2696KiB/s (2761kB/s)(26.4MiB/10018msec) 00:36:01.298 slat (nsec): min=5715, max=83761, avg=12118.36, stdev=9186.54 00:36:01.298 clat (usec): 
min=10299, max=35356, avg=23641.52, stdev=2070.69 00:36:01.298 lat (usec): min=10306, max=35362, avg=23653.64, stdev=2071.10 00:36:01.298 clat percentiles (usec): 00:36:01.298 | 1.00th=[14091], 5.00th=[19530], 10.00th=[23462], 20.00th=[23725], 00:36:01.298 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:36:01.298 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:36:01.298 | 99.00th=[30278], 99.50th=[32637], 99.90th=[35390], 99.95th=[35390], 00:36:01.298 | 99.99th=[35390] 00:36:01.298 bw ( KiB/s): min= 2560, max= 2912, per=4.22%, avg=2693.79, stdev=80.64, samples=19 00:36:01.298 iops : min= 640, max= 728, avg=673.37, stdev=20.12, samples=19 00:36:01.298 lat (msec) : 20=5.55%, 50=94.45% 00:36:01.298 cpu : usr=98.96%, sys=0.76%, ctx=16, majf=0, minf=9 00:36:01.298 IO depths : 1=5.1%, 2=10.8%, 4=23.0%, 8=53.6%, 16=7.4%, 32=0.0%, >=64=0.0% 00:36:01.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.298 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.298 issued rwts: total=6752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.298 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.298 filename2: (groupid=0, jobs=1): err= 0: pid=3034178: Thu Dec 5 14:26:06 2024 00:36:01.298 read: IOPS=666, BW=2666KiB/s (2730kB/s)(26.0MiB/10003msec) 00:36:01.298 slat (nsec): min=5780, max=95834, avg=27651.85, stdev=16237.07 00:36:01.298 clat (usec): min=2962, max=45874, avg=23734.47, stdev=1672.13 00:36:01.298 lat (usec): min=2968, max=45895, avg=23762.13, stdev=1672.96 00:36:01.298 clat percentiles (usec): 00:36:01.298 | 1.00th=[23200], 5.00th=[23462], 10.00th=[23462], 20.00th=[23462], 00:36:01.298 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23725], 00:36:01.298 | 70.00th=[23987], 80.00th=[23987], 90.00th=[23987], 95.00th=[24249], 00:36:01.298 | 99.00th=[24773], 99.50th=[25035], 99.90th=[45876], 99.95th=[45876], 00:36:01.298 | 99.99th=[45876] 00:36:01.298 bw ( KiB/s): min= 2436, max= 2693, per=4.16%, avg=2654.47, stdev=71.24, samples=19 00:36:01.298 iops : min= 609, max= 673, avg=663.58, stdev=17.79, samples=19 00:36:01.298 lat (msec) : 4=0.15%, 10=0.24%, 20=0.48%, 50=99.13% 00:36:01.298 cpu : usr=98.83%, sys=0.92%, ctx=13, majf=0, minf=9 00:36:01.298 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:01.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.298 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.298 issued rwts: total=6666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.298 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.298 filename2: (groupid=0, jobs=1): err= 0: pid=3034179: Thu Dec 5 14:26:06 2024 00:36:01.298 read: IOPS=665, BW=2663KiB/s (2727kB/s)(26.0MiB/10011msec) 00:36:01.298 slat (nsec): min=5744, max=68900, avg=17268.17, stdev=10638.75 00:36:01.298 clat (usec): min=10488, max=35504, avg=23864.41, stdev=1038.80 00:36:01.298 lat (usec): min=10494, max=35521, avg=23881.68, stdev=1038.83 00:36:01.298 clat percentiles (usec): 00:36:01.298 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:36:01.298 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:36:01.298 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:36:01.298 | 99.00th=[24773], 99.50th=[25035], 99.90th=[35390], 99.95th=[35390], 00:36:01.298 | 99.99th=[35390] 00:36:01.298 bw ( KiB/s): min= 2560, max= 2688, per=4.16%, 
avg=2654.26, stdev=57.29, samples=19 00:36:01.298 iops : min= 640, max= 672, avg=663.53, stdev=14.33, samples=19 00:36:01.298 lat (msec) : 20=0.63%, 50=99.37% 00:36:01.298 cpu : usr=98.60%, sys=1.02%, ctx=142, majf=0, minf=9 00:36:01.298 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:01.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.298 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.298 issued rwts: total=6666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.298 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.298 filename2: (groupid=0, jobs=1): err= 0: pid=3034180: Thu Dec 5 14:26:06 2024 00:36:01.299 read: IOPS=665, BW=2662KiB/s (2726kB/s)(26.0MiB/10002msec) 00:36:01.299 slat (nsec): min=5712, max=69996, avg=14406.70, stdev=10701.46 00:36:01.299 clat (usec): min=16425, max=30135, avg=23926.87, stdev=626.06 00:36:01.299 lat (usec): min=16435, max=30156, avg=23941.28, stdev=624.95 00:36:01.299 clat percentiles (usec): 00:36:01.299 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:36:01.299 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:36:01.299 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:36:01.299 | 99.00th=[25035], 99.50th=[25035], 99.90th=[30016], 99.95th=[30016], 00:36:01.299 | 99.99th=[30016] 00:36:01.299 bw ( KiB/s): min= 2560, max= 2688, per=4.17%, avg=2660.42, stdev=53.31, samples=19 00:36:01.299 iops : min= 640, max= 672, avg=665.05, stdev=13.31, samples=19 00:36:01.299 lat (msec) : 20=0.48%, 50=99.52% 00:36:01.299 cpu : usr=98.70%, sys=0.89%, ctx=68, majf=0, minf=9 00:36:01.299 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:36:01.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.299 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.299 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.299 filename2: (groupid=0, jobs=1): err= 0: pid=3034181: Thu Dec 5 14:26:06 2024 00:36:01.299 read: IOPS=666, BW=2667KiB/s (2731kB/s)(26.1MiB/10004msec) 00:36:01.299 slat (nsec): min=5680, max=87445, avg=25758.61, stdev=15128.47 00:36:01.299 clat (usec): min=3315, max=46641, avg=23782.61, stdev=1790.11 00:36:01.299 lat (usec): min=3321, max=46661, avg=23808.37, stdev=1790.33 00:36:01.299 clat percentiles (usec): 00:36:01.299 | 1.00th=[21890], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:36:01.299 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:01.299 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:36:01.299 | 99.00th=[25035], 99.50th=[25035], 99.90th=[46400], 99.95th=[46400], 00:36:01.299 | 99.99th=[46400] 00:36:01.299 bw ( KiB/s): min= 2432, max= 2693, per=4.16%, avg=2654.26, stdev=71.93, samples=19 00:36:01.299 iops : min= 608, max= 673, avg=663.53, stdev=17.96, samples=19 00:36:01.299 lat (msec) : 4=0.21%, 10=0.24%, 20=0.48%, 50=99.07% 00:36:01.299 cpu : usr=98.85%, sys=0.89%, ctx=13, majf=0, minf=9 00:36:01.299 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:01.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.299 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.299 issued rwts: total=6670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.299 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:36:01.299 filename2: (groupid=0, jobs=1): err= 0: pid=3034182: Thu Dec 5 14:26:06 2024 00:36:01.299 read: IOPS=675, BW=2701KiB/s (2766kB/s)(26.4MiB/10007msec) 00:36:01.299 slat (nsec): min=5778, max=95324, avg=12274.46, stdev=11138.98 00:36:01.299 clat (usec): min=5923, max=28131, avg=23593.83, stdev=2022.15 00:36:01.299 lat (usec): min=5932, max=28141, avg=23606.11, stdev=2021.43 00:36:01.299 clat percentiles (usec): 00:36:01.299 | 1.00th=[10945], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:36:01.299 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:36:01.299 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24511], 00:36:01.299 | 99.00th=[24773], 99.50th=[25297], 99.90th=[27657], 99.95th=[28181], 00:36:01.299 | 99.99th=[28181] 00:36:01.299 bw ( KiB/s): min= 2560, max= 3376, per=4.24%, avg=2704.00, stdev=168.99, samples=19 00:36:01.299 iops : min= 640, max= 844, avg=676.00, stdev=42.25, samples=19 00:36:01.299 lat (msec) : 10=0.67%, 20=3.00%, 50=96.33% 00:36:01.299 cpu : usr=98.85%, sys=0.88%, ctx=27, majf=0, minf=9 00:36:01.299 IO depths : 1=5.8%, 2=11.8%, 4=24.1%, 8=51.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:01.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.299 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.299 issued rwts: total=6758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.299 filename2: (groupid=0, jobs=1): err= 0: pid=3034183: Thu Dec 5 14:26:06 2024 00:36:01.299 read: IOPS=665, BW=2661KiB/s (2724kB/s)(26.0MiB/10004msec) 00:36:01.299 slat (nsec): min=5666, max=97912, avg=24952.56, stdev=14838.73 00:36:01.299 clat (usec): min=8022, max=41164, avg=23824.41, stdev=1395.74 00:36:01.299 lat (usec): min=8028, max=41182, avg=23849.36, stdev=1395.76 00:36:01.299 clat percentiles (usec): 00:36:01.299 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:36:01.299 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:36:01.299 | 70.00th=[23987], 80.00th=[23987], 90.00th=[24249], 95.00th=[24249], 00:36:01.299 | 99.00th=[24773], 99.50th=[31851], 99.90th=[41157], 99.95th=[41157], 00:36:01.299 | 99.99th=[41157] 00:36:01.299 bw ( KiB/s): min= 2432, max= 2688, per=4.16%, avg=2653.68, stdev=71.64, samples=19 00:36:01.299 iops : min= 608, max= 672, avg=663.37, stdev=17.89, samples=19 00:36:01.299 lat (msec) : 10=0.21%, 20=0.57%, 50=99.22% 00:36:01.299 cpu : usr=99.02%, sys=0.71%, ctx=44, majf=0, minf=9 00:36:01.299 IO depths : 1=5.9%, 2=12.1%, 4=24.9%, 8=50.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:36:01.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.299 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.299 issued rwts: total=6654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.299 filename2: (groupid=0, jobs=1): err= 0: pid=3034184: Thu Dec 5 14:26:06 2024 00:36:01.299 read: IOPS=670, BW=2681KiB/s (2745kB/s)(26.2MiB/10020msec) 00:36:01.299 slat (usec): min=5, max=105, avg= 9.62, stdev= 8.22 00:36:01.299 clat (usec): min=9560, max=31796, avg=23792.25, stdev=1819.84 00:36:01.299 lat (usec): min=9567, max=31802, avg=23801.87, stdev=1818.78 00:36:01.299 clat percentiles (usec): 00:36:01.299 | 1.00th=[13829], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:36:01.299 | 30.00th=[23987], 40.00th=[23987], 
50.00th=[23987], 60.00th=[23987], 00:36:01.299 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24249], 95.00th=[24511], 00:36:01.299 | 99.00th=[31065], 99.50th=[31589], 99.90th=[31851], 99.95th=[31851], 00:36:01.299 | 99.99th=[31851] 00:36:01.299 bw ( KiB/s): min= 2560, max= 2816, per=4.20%, avg=2680.40, stdev=58.35, samples=20 00:36:01.299 iops : min= 640, max= 704, avg=670.10, stdev=14.59, samples=20 00:36:01.299 lat (msec) : 10=0.10%, 20=3.02%, 50=96.87% 00:36:01.299 cpu : usr=98.84%, sys=0.88%, ctx=39, majf=0, minf=9 00:36:01.299 IO depths : 1=5.6%, 2=11.6%, 4=24.2%, 8=51.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:36:01.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.299 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:01.299 issued rwts: total=6715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:01.299 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:01.299 00:36:01.299 Run status group 0 (all jobs): 00:36:01.300 READ: bw=62.3MiB/s (65.4MB/s), 2661KiB/s-2748KiB/s (2724kB/s-2814kB/s), io=627MiB (658MB), run=10001-10060msec 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
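destroy_subsystems mirrors the setup path: for each index it deletes the NVMe-oF subsystem first, then the null bdev behind it (subsystem 2 gets the same two RPCs immediately below). As a sketch, reusing the hypothetical rpc wrapper from the create_subsystem example above:

# Teardown counterpart of the create_subsystem sketch.
for sub in 0 1 2; do
    rpc nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub"
    rpc bdev_null_delete "bdev_null$sub"
done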
00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.300 bdev_null0 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.300 14:26:06 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.300 [2024-12-05 14:26:06.521021] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.300 bdev_null1 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.300 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:01.301 { 00:36:01.301 "params": { 00:36:01.301 "name": "Nvme$subsystem", 00:36:01.301 "trtype": "$TEST_TRANSPORT", 00:36:01.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:01.301 "adrfam": "ipv4", 00:36:01.301 "trsvcid": "$NVMF_PORT", 00:36:01.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:01.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:01.301 "hdgst": ${hdgst:-false}, 00:36:01.301 "ddgst": ${ddgst:-false} 00:36:01.301 }, 00:36:01.301 "method": "bdev_nvme_attach_controller" 00:36:01.301 } 00:36:01.301 EOF 00:36:01.301 )") 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:01.301 { 00:36:01.301 "params": { 00:36:01.301 "name": "Nvme$subsystem", 00:36:01.301 "trtype": "$TEST_TRANSPORT", 00:36:01.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:01.301 "adrfam": "ipv4", 00:36:01.301 "trsvcid": "$NVMF_PORT", 00:36:01.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:01.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:01.301 "hdgst": ${hdgst:-false}, 00:36:01.301 "ddgst": ${ddgst:-false} 00:36:01.301 }, 00:36:01.301 "method": "bdev_nvme_attach_controller" 00:36:01.301 } 00:36:01.301 EOF 00:36:01.301 )") 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:01.301 "params": { 00:36:01.301 "name": "Nvme0", 00:36:01.301 "trtype": "tcp", 00:36:01.301 "traddr": "10.0.0.2", 00:36:01.301 "adrfam": "ipv4", 00:36:01.301 "trsvcid": "4420", 00:36:01.301 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:01.301 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:01.301 "hdgst": false, 00:36:01.301 "ddgst": false 00:36:01.301 }, 00:36:01.301 "method": "bdev_nvme_attach_controller" 00:36:01.301 },{ 00:36:01.301 "params": { 00:36:01.301 "name": "Nvme1", 00:36:01.301 "trtype": "tcp", 00:36:01.301 "traddr": "10.0.0.2", 00:36:01.301 "adrfam": "ipv4", 00:36:01.301 "trsvcid": "4420", 00:36:01.301 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:01.301 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:01.301 "hdgst": false, 00:36:01.301 "ddgst": false 00:36:01.301 }, 00:36:01.301 "method": "bdev_nvme_attach_controller" 00:36:01.301 }' 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:01.301 14:26:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:01.301 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:01.301 ... 00:36:01.301 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:01.301 ... 
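The trace above builds both fio inputs on anonymous file descriptors: /dev/fd/62 carries the generated SPDK JSON config (one bdev_nvme_attach_controller entry per subsystem, as printed) and /dev/fd/61 carries the job file. A standalone sketch of the same invocation, with both inputs written to ordinary files, could look like the following; the bdev name Nvme0n1 and the top-level "subsystems" wrapper are assumptions, not taken from this log.

#!/usr/bin/env bash
# Sketch only: re-create the traced fio_bdev run by hand.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path from this run

# JSON config mirroring the printf output above, wrapped in the
# standard bdev-subsystem envelope (assumed, not shown in the trace).
cat > /tmp/bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON

# Job file matching the traced parameters (bs=8k,16k,128k, numjobs=2,
# iodepth=8, runtime=5); the attached namespace appears as bdev Nvme0n1 (assumed).
cat > /tmp/dif.fio <<'FIO'
[global]
ioengine=spdk_bdev
spdk_json_conf=/tmp/bdev.json
thread=1
rw=randread
bs=8k,16k,128k
numjobs=2
iodepth=8
runtime=5
time_based=1

[filename0]
filename=Nvme0n1
FIO

# The SPDK fio plugin is injected via LD_PRELOAD, exactly as in the trace.
LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" /usr/src/fio/fio /tmp/dif.fio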
00:36:01.301 fio-3.35
00:36:01.301 Starting 4 threads
00:36:06.586
00:36:06.586 filename0: (groupid=0, jobs=1): err= 0: pid=3036383: Thu Dec 5 14:26:12 2024
00:36:06.586 read: IOPS=2944, BW=23.0MiB/s (24.1MB/s)(115MiB/5002msec)
00:36:06.586 slat (nsec): min=5570, max=54506, avg=8788.25, stdev=4043.11
00:36:06.586 clat (usec): min=1060, max=4929, avg=2695.21, stdev=203.95
00:36:06.586 lat (usec): min=1076, max=4944, avg=2704.00, stdev=203.76
00:36:06.586 clat percentiles (usec):
00:36:06.586 | 1.00th=[ 2040], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2638],
00:36:06.586 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2704],
00:36:06.586 | 70.00th=[ 2737], 80.00th=[ 2737], 90.00th=[ 2802], 95.00th=[ 2966],
00:36:06.586 | 99.00th=[ 3326], 99.50th=[ 3556], 99.90th=[ 4047], 99.95th=[ 4146],
00:36:06.586 | 99.99th=[ 4883]
00:36:06.586 bw ( KiB/s): min=23440, max=23664, per=25.25%, avg=23559.11, stdev=83.57, samples=9
00:36:06.586 iops : min= 2930, max= 2958, avg=2944.89, stdev=10.45, samples=9
00:36:06.586 lat (msec) : 2=0.84%, 4=99.04%, 10=0.12%
00:36:06.586 cpu : usr=96.26%, sys=3.48%, ctx=8, majf=0, minf=26
00:36:06.586 IO depths : 1=0.1%, 2=0.3%, 4=68.4%, 8=31.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:06.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:06.586 complete : 0=0.0%, 4=95.3%, 8=4.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:06.586 issued rwts: total=14726,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:06.586 latency : target=0, window=0, percentile=100.00%, depth=8
00:36:06.586 filename0: (groupid=0, jobs=1): err= 0: pid=3036384: Thu Dec 5 14:26:12 2024
00:36:06.586 read: IOPS=2918, BW=22.8MiB/s (23.9MB/s)(114MiB/5005msec)
00:36:06.586 slat (nsec): min=5530, max=87155, avg=7003.00, stdev=3458.98
00:36:06.586 clat (usec): min=959, max=6709, avg=2722.25, stdev=245.00
00:36:06.586 lat (usec): min=969, max=6716, avg=2729.25, stdev=245.14
00:36:06.586 clat percentiles (usec):
00:36:06.586 | 1.00th=[ 2147], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2671],
00:36:06.586 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737],
00:36:06.586 | 70.00th=[ 2737], 80.00th=[ 2737], 90.00th=[ 2835], 95.00th=[ 2999],
00:36:06.586 | 99.00th=[ 3851], 99.50th=[ 4080], 99.90th=[ 4817], 99.95th=[ 6456],
00:36:06.586 | 99.99th=[ 6718]
00:36:06.586 bw ( KiB/s): min=23120, max=23584, per=25.05%, avg=23369.40, stdev=186.23, samples=10
00:36:06.586 iops : min= 2890, max= 2948, avg=2921.10, stdev=23.36, samples=10
00:36:06.586 lat (usec) : 1000=0.02%
00:36:06.586 lat (msec) : 2=0.53%, 4=98.80%, 10=0.65%
00:36:06.586 cpu : usr=97.04%, sys=2.72%, ctx=9, majf=0, minf=72
00:36:06.586 IO depths : 1=0.1%, 2=0.2%, 4=70.2%, 8=29.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:06.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:06.586 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:06.586 issued rwts: total=14608,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:06.586 latency : target=0, window=0, percentile=100.00%, depth=8
00:36:06.586 filename1: (groupid=0, jobs=1): err= 0: pid=3036385: Thu Dec 5 14:26:12 2024
00:36:06.586 read: IOPS=2890, BW=22.6MiB/s (23.7MB/s)(113MiB/5005msec)
00:36:06.586 slat (nsec): min=5529, max=83210, avg=7042.48, stdev=3677.88
00:36:06.586 clat (usec): min=1154, max=8006, avg=2747.14, stdev=315.70
00:36:06.586 lat (usec): min=1159, max=8012, avg=2754.18, stdev=315.80
00:36:06.586 clat percentiles (usec):
00:36:06.586 | 1.00th=[ 2089], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2671],
00:36:06.586 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737],
00:36:06.586 | 70.00th=[ 2737], 80.00th=[ 2737], 90.00th=[ 2900], 95.00th=[ 3228],
00:36:06.586 | 99.00th=[ 4080], 99.50th=[ 4228], 99.90th=[ 4686], 99.95th=[ 6456],
00:36:06.586 | 99.99th=[ 8029]
00:36:06.586 bw ( KiB/s): min=22896, max=23392, per=24.80%, avg=23139.20, stdev=158.18, samples=10
00:36:06.586 iops : min= 2862, max= 2924, avg=2892.40, stdev=19.77, samples=10
00:36:06.586 lat (msec) : 2=0.59%, 4=97.91%, 10=1.49%
00:36:06.586 cpu : usr=97.66%, sys=2.08%, ctx=9, majf=0, minf=38
00:36:06.586 IO depths : 1=0.1%, 2=0.2%, 4=73.2%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:06.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:06.586 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:06.587 issued rwts: total=14468,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:06.587 latency : target=0, window=0, percentile=100.00%, depth=8
00:36:06.587 filename1: (groupid=0, jobs=1): err= 0: pid=3036386: Thu Dec 5 14:26:12 2024
00:36:06.587 read: IOPS=2910, BW=22.7MiB/s (23.8MB/s)(114MiB/5004msec)
00:36:06.587 slat (nsec): min=5528, max=84672, avg=6528.71, stdev=2946.91
00:36:06.587 clat (usec): min=1203, max=6482, avg=2730.45, stdev=240.14
00:36:06.587 lat (usec): min=1209, max=6488, avg=2736.97, stdev=240.32
00:36:06.587 clat percentiles (usec):
00:36:06.587 | 1.00th=[ 2212], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2671],
00:36:06.587 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737],
00:36:06.587 | 70.00th=[ 2737], 80.00th=[ 2737], 90.00th=[ 2835], 95.00th=[ 2999],
00:36:06.587 | 99.00th=[ 3916], 99.50th=[ 4047], 99.90th=[ 4948], 99.95th=[ 6063],
00:36:06.587 | 99.99th=[ 6456]
00:36:06.587 bw ( KiB/s): min=23040, max=23520, per=24.97%, avg=23292.60, stdev=159.83, samples=10
00:36:06.587 iops : min= 2880, max= 2940, avg=2911.50, stdev=20.06, samples=10
00:36:06.587 lat (msec) : 2=0.36%, 4=98.97%, 10=0.67%
00:36:06.587 cpu : usr=97.28%, sys=2.48%, ctx=7, majf=0, minf=46
00:36:06.587 IO depths : 1=0.1%, 2=0.1%, 4=74.1%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:06.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:06.587 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:06.587 issued rwts: total=14563,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:06.587 latency : target=0, window=0, percentile=100.00%, depth=8
00:36:06.587
00:36:06.587 Run status group 0 (all jobs):
00:36:06.587 READ: bw=91.1MiB/s (95.5MB/s), 22.6MiB/s-23.0MiB/s (23.7MB/s-24.1MB/s), io=456MiB (478MB), run=5002-5005msec
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:06.848
00:36:06.848 real 0m24.448s
00:36:06.848 user 5m16.678s
00:36:06.848 sys 0m4.728s
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:06.848 14:26:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:36:06.848 ************************************
00:36:06.848 END TEST fio_dif_rand_params
00:36:06.848 ************************************
00:36:06.848 14:26:12 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest
00:36:06.848 14:26:12 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:36:06.848 14:26:12 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:06.848 14:26:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:36:06.848 ************************************
00:36:06.848 START TEST fio_dif_digest
00:36:06.848 ************************************
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@"
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:36:06.848 bdev_null0
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:06.848 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:36:06.849 [2024-12-05 14:26:13.065265] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=()
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:06.849 {
00:36:06.849 "params": {
00:36:06.849 "name": "Nvme$subsystem",
00:36:06.849 "trtype": "$TEST_TRANSPORT",
00:36:06.849 "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:06.849 "adrfam": "ipv4",
00:36:06.849 "trsvcid": "$NVMF_PORT",
00:36:06.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:06.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:06.849 "hdgst": ${hdgst:-false},
00:36:06.849 "ddgst": ${ddgst:-false}
00:36:06.849 },
00:36:06.849 "method": "bdev_nvme_attach_controller"
00:36:06.849 }
00:36:06.849 EOF
00:36:06.849 )")
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib=
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 ))
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files ))
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq .
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=,
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:36:06.849 "params": {
00:36:06.849 "name": "Nvme0",
00:36:06.849 "trtype": "tcp",
00:36:06.849 "traddr": "10.0.0.2",
00:36:06.849 "adrfam": "ipv4",
00:36:06.849 "trsvcid": "4420",
00:36:06.849 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:36:06.849 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:36:06.849 "hdgst": true,
00:36:06.849 "ddgst": true
00:36:06.849 },
00:36:06.849 "method": "bdev_nvme_attach_controller"
00:36:06.849 }'
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:36:06.849 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:36:07.136 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=
00:36:07.136 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:36:07.136 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:36:07.136 14:26:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:07.401 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:36:07.401 ...
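The digest pass differs from the earlier run in two ways visible in the generated config: the null bdev carries DIF type 3 metadata, and the initiator requests NVMe/TCP header and data digests ("hdgst": true, "ddgst": true). The target-side setup that rpc_cmd drives above can be replayed directly with scripts/rpc.py; the sketch below uses the same arguments as the trace and assumes the TCP transport was already created earlier in the suite.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 3 (as traced).
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# Subsystem, namespace, and TCP listener, exactly as in the trace.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

Note that digest use is negotiated at connect time by the hdgst/ddgst flags in bdev_nvme_attach_controller on the initiator side; no additional target RPC is involved.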
00:36:07.401 fio-3.35
00:36:07.401 Starting 3 threads
00:36:19.642
00:36:19.642 filename0: (groupid=0, jobs=1): err= 0: pid=3037876: Thu Dec 5 14:26:23 2024
00:36:19.642 read: IOPS=298, BW=37.3MiB/s (39.1MB/s)(374MiB/10045msec)
00:36:19.642 slat (nsec): min=5914, max=32498, avg=6598.86, stdev=1041.16
00:36:19.642 clat (usec): min=6053, max=49139, avg=10027.96, stdev=1063.30
00:36:19.642 lat (usec): min=6059, max=49145, avg=10034.56, stdev=1063.30
00:36:19.642 clat percentiles (usec):
00:36:19.642 | 1.00th=[ 8094], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9372],
00:36:19.642 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10159],
00:36:19.642 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11338],
00:36:19.642 | 99.00th=[11863], 99.50th=[12125], 99.90th=[12518], 99.95th=[13173],
00:36:19.642 | 99.99th=[49021]
00:36:19.642 bw ( KiB/s): min=37632, max=39680, per=34.56%, avg=38310.40, stdev=565.00, samples=20
00:36:19.642 iops : min= 294, max= 310, avg=299.30, stdev= 4.41, samples=20
00:36:19.642 lat (msec) : 10=49.00%, 20=50.97%, 50=0.03%
00:36:19.642 cpu : usr=94.62%, sys=5.14%, ctx=12, majf=0, minf=139
00:36:19.642 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:19.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:19.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:19.642 issued rwts: total=2994,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:19.642 latency : target=0, window=0, percentile=100.00%, depth=3
00:36:19.642 filename0: (groupid=0, jobs=1): err= 0: pid=3037877: Thu Dec 5 14:26:23 2024
00:36:19.642 read: IOPS=279, BW=35.0MiB/s (36.7MB/s)(351MiB/10045msec)
00:36:19.642 slat (nsec): min=5905, max=32219, avg=6646.70, stdev=1218.90
00:36:19.642 clat (usec): min=8080, max=52595, avg=10700.54, stdev=1879.34
00:36:19.642 lat (usec): min=8087, max=52602, avg=10707.19, stdev=1879.34
00:36:19.642 clat percentiles (usec):
00:36:19.642 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[ 9896],
00:36:19.642 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814],
00:36:19.642 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[12125],
00:36:19.642 | 99.00th=[12780], 99.50th=[13173], 99.90th=[51119], 99.95th=[51643],
00:36:19.642 | 99.99th=[52691]
00:36:19.642 bw ( KiB/s): min=32702, max=36864, per=32.42%, avg=35939.10, stdev=927.23, samples=20
00:36:19.642 iops : min= 255, max= 288, avg=280.75, stdev= 7.33, samples=20
00:36:19.642 lat (msec) : 10=21.89%, 20=77.94%, 50=0.04%, 100=0.14%
00:36:19.642 cpu : usr=94.51%, sys=5.26%, ctx=13, majf=0, minf=121
00:36:19.642 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:19.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:19.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:19.642 issued rwts: total=2810,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:19.642 latency : target=0, window=0, percentile=100.00%, depth=3
00:36:19.642 filename0: (groupid=0, jobs=1): err= 0: pid=3037878: Thu Dec 5 14:26:23 2024
00:36:19.642 read: IOPS=288, BW=36.1MiB/s (37.8MB/s)(362MiB/10047msec)
00:36:19.642 slat (nsec): min=5946, max=32897, avg=6638.33, stdev=1030.12
00:36:19.642 clat (usec): min=6877, max=47360, avg=10377.54, stdev=1253.33
00:36:19.642 lat (usec): min=6884, max=47370, avg=10384.18, stdev=1253.38
00:36:19.642 clat percentiles (usec):
00:36:19.642 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765],
00:36:19.642 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552],
00:36:19.642 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11731],
00:36:19.642 | 99.00th=[12256], 99.50th=[12649], 99.90th=[13698], 99.95th=[46924],
00:36:19.642 | 99.99th=[47449]
00:36:19.642 bw ( KiB/s): min=35840, max=38067, per=33.43%, avg=37064.95, stdev=577.71, samples=20
00:36:19.642 iops : min= 280, max= 297, avg=289.55, stdev= 4.48, samples=20
00:36:19.642 lat (msec) : 10=32.82%, 20=67.12%, 50=0.07%
00:36:19.642 cpu : usr=94.64%, sys=5.12%, ctx=20, majf=0, minf=124
00:36:19.642 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:19.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:19.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:19.642 issued rwts: total=2898,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:19.642 latency : target=0, window=0, percentile=100.00%, depth=3
00:36:19.642
00:36:19.642 Run status group 0 (all jobs):
00:36:19.642 READ: bw=108MiB/s (114MB/s), 35.0MiB/s-37.3MiB/s (36.7MB/s-39.1MB/s), io=1088MiB (1141MB), run=10045-10047msec
00:36:19.643 14:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0
00:36:19.643 14:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub
00:36:19.643 14:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@"
00:36:19.643 14:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0
00:36:19.643 14:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0
00:36:19.643 14:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:36:19.643 14:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.643 14:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:36:19.643 14:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.643 14:26:24 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:36:19.643 14:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.643 14:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:36:19.643 14:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.643
00:36:19.643 real 0m11.128s
00:36:19.643 user 0m41.953s
00:36:19.643 sys 0m1.882s
00:36:19.643 14:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:19.643 14:26:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:36:19.643 ************************************
00:36:19.643 END TEST fio_dif_digest
00:36:19.643 ************************************
00:36:19.643 14:26:24 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:36:19.643 14:26:24 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini
00:36:19.643 14:26:24 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:19.643 14:26:24 nvmf_dif -- nvmf/common.sh@121 -- # sync
00:36:19.643 14:26:24 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:19.643 14:26:24 nvmf_dif -- nvmf/common.sh@124 -- # set +e
00:36:19.643 14:26:24 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:19.643 14:26:24 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:36:19.643 rmmod nvme_tcp
00:36:19.643 rmmod nvme_fabrics
00:36:19.643 rmmod nvme_keyring
00:36:19.643 14:26:24 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:19.643 14:26:24 nvmf_dif -- nvmf/common.sh@128 -- # set -e
00:36:19.643 14:26:24 nvmf_dif -- nvmf/common.sh@129 -- # return 0
00:36:19.643 14:26:24 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3027522 ']'
00:36:19.643 14:26:24 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3027522
00:36:19.643 14:26:24 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3027522 ']'
00:36:19.643 14:26:24 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3027522
00:36:19.643 14:26:24 nvmf_dif -- common/autotest_common.sh@959 -- # uname
00:36:19.643 14:26:24 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:19.643 14:26:24 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3027522
00:36:19.643 14:26:24 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:36:19.643 14:26:24 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:36:19.643 14:26:24 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3027522'
00:36:19.643 killing process with pid 3027522
00:36:19.643 14:26:24 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3027522
00:36:19.643 14:26:24 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3027522
00:36:19.643 14:26:24 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']'
00:36:19.643 14:26:24 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:36:21.558 Waiting for block devices as requested
00:36:21.558 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:36:21.819 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:36:21.819 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:36:21.819 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:36:22.080 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:36:22.080 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:36:22.080 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:36:22.342 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:36:22.342 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:36:22.342 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:36:22.602 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:36:22.602 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:36:22.602 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:36:22.863 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:36:22.863 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:36:22.863 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:36:23.124 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:36:23.124 14:26:29 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:36:23.124 14:26:29 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:36:23.124 14:26:29 nvmf_dif -- nvmf/common.sh@297 -- # iptr
00:36:23.124 14:26:29 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save
00:36:23.124 14:26:29 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:36:23.124 14:26:29 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore
00:36:23.124 14:26:29 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:23.124 14:26:29 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:23.124 14:26:29 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:23.124 14:26:29 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:36:23.124 14:26:29 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:25.041 14:26:31 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:25.041
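Condensed, the nvmftestfini teardown traced above performs roughly the following steps; the pid, module, and interface names are the ones from this run, and the netns removal inside _remove_spdk_ns is an assumption rather than something shown in the log.

# Sketch of the traced teardown sequence.
sync
modprobe -v -r nvme-tcp        # unloads nvme_tcp (and, as logged, nvme_fabrics/nvme_keyring)
modprobe -v -r nvme-fabrics
kill 3027522                   # nvmfpid recorded when the target was started
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the test's tagged rules
ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1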
00:36:25.041 real 1m17.490s 00:36:25.041 user 7m59.960s 00:36:25.041 sys 0m21.794s 00:36:25.041 14:26:31 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:25.041 14:26:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:25.041 ************************************ 00:36:25.041 END TEST nvmf_dif 00:36:25.041 ************************************ 00:36:25.302 14:26:31 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:25.302 14:26:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:25.302 14:26:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:25.302 14:26:31 -- common/autotest_common.sh@10 -- # set +x 00:36:25.302 ************************************ 00:36:25.302 START TEST nvmf_abort_qd_sizes 00:36:25.302 ************************************ 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:25.302 * Looking for test storage... 00:36:25.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:25.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:25.302 --rc genhtml_branch_coverage=1 00:36:25.302 --rc genhtml_function_coverage=1 00:36:25.302 --rc genhtml_legend=1 00:36:25.302 --rc geninfo_all_blocks=1 00:36:25.302 --rc geninfo_unexecuted_blocks=1 00:36:25.302 00:36:25.302 ' 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:25.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:25.302 --rc genhtml_branch_coverage=1 00:36:25.302 --rc genhtml_function_coverage=1 00:36:25.302 --rc genhtml_legend=1 00:36:25.302 --rc geninfo_all_blocks=1 00:36:25.302 --rc geninfo_unexecuted_blocks=1 00:36:25.302 00:36:25.302 ' 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:25.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:25.302 --rc genhtml_branch_coverage=1 00:36:25.302 --rc genhtml_function_coverage=1 00:36:25.302 --rc genhtml_legend=1 00:36:25.302 --rc geninfo_all_blocks=1 00:36:25.302 --rc geninfo_unexecuted_blocks=1 00:36:25.302 00:36:25.302 ' 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:25.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:25.302 --rc genhtml_branch_coverage=1 00:36:25.302 --rc genhtml_function_coverage=1 00:36:25.302 --rc genhtml_legend=1 00:36:25.302 --rc geninfo_all_blocks=1 00:36:25.302 --rc geninfo_unexecuted_blocks=1 00:36:25.302 00:36:25.302 ' 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:25.302 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:25.303 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:25.303 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:25.303 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:25.303 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:25.564 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:36:25.564 14:26:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:33.711 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:33.711 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:33.711 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:33.711 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:33.712 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:33.712 14:26:38 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:33.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:33.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:36:33.712 00:36:33.712 --- 10.0.0.2 ping statistics --- 00:36:33.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:33.712 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:33.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:33.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:36:33.712 00:36:33.712 --- 10.0.0.1 ping statistics --- 00:36:33.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:33.712 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:33.712 14:26:38 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:36.256 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:36.256 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:36.256 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:36.256 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:36.256 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:36.256 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:36.256 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:36.256 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:36.256 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:36.256 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:36.256 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:36.256 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:36.256 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:36.256 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:36.256 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:36.256 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:36.256 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:36.256 14:26:42 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:36.256 14:26:42 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:36.256 14:26:42 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:36.256 14:26:42 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:36.256 14:26:42 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:36.256 14:26:42 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:36.256 14:26:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:36.256 14:26:42 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:36.256 14:26:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:36.256 14:26:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:36.256 14:26:42 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3047031 00:36:36.256 14:26:42 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3047031 00:36:36.256 14:26:42 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:36.256 14:26:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3047031 ']' 00:36:36.256 14:26:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:36.256 14:26:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:36.256 14:26:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
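The nvmf_tcp_init sequence traced above pairs the two E810 ports back to back: the target port moves into a private network namespace with 10.0.0.2 while the initiator port keeps 10.0.0.1 in the root namespace, so NVMe/TCP traffic crosses a real link instead of loopback, and the bidirectional pings confirm the path. Condensed into a standalone sketch (interface and namespace names taken from the trace, error handling omitted):

    #!/usr/bin/env bash
    # Sketch: isolate the target NIC in a network namespace so initiator and
    # target traffic crosses a real link, as nvmf_tcp_init does above.
    set -e
    TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"              # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"          # initiator IP, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    # Admit NVMe/TCP traffic; the comment tag lets teardown grep the rule out.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT"
    ping -c 1 10.0.0.2                             # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1         # target -> initiator

The target application is then launched inside the namespace with ip netns exec, which is why the trace prefixes nvmf_tgt with the NVMF_TARGET_NS_CMD array.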
00:36:36.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:36.256 14:26:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:36.256 14:26:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:36.256 [2024-12-05 14:26:42.534143] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:36:36.256 [2024-12-05 14:26:42.534204] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:36.517 [2024-12-05 14:26:42.632074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:36.517 [2024-12-05 14:26:42.686682] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:36.517 [2024-12-05 14:26:42.686736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:36.517 [2024-12-05 14:26:42.686745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:36.517 [2024-12-05 14:26:42.686753] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:36.517 [2024-12-05 14:26:42.686759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:36.517 [2024-12-05 14:26:42.688874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:36.517 [2024-12-05 14:26:42.689032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:36.517 [2024-12-05 14:26:42.689193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:36.517 [2024-12-05 14:26:42.689193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:37.091 14:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:37.091 14:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:36:37.091 14:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:37.091 14:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:37.091 14:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:37.352 14:26:43 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:37.352 14:26:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:37.352 14:26:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:37.352 14:26:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:37.352 14:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:36:37.352 14:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:36:37.352 14:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:36:37.352 14:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:37.352 14:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:36:37.352 14:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:36:37.352 14:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:36:37.352 
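The nvme_in_userspace scan entered above and finishing just below looks for NVMe controllers by PCI class code 0x010802 and keeps those still owned by the kernel nvme driver, here 0000:65:00.0. A sketch of that check under the same sysfs assumptions:

    #!/usr/bin/env bash
    # Sketch: list NVMe controllers (PCI class code 0x010802) and keep those
    # whose BDF is still linked under the kernel nvme driver.
    bdfs=()
    for dev in /sys/bus/pci/devices/*; do
        [[ $(cat "$dev/class") == 0x010802 ]] || continue   # NVMe class code
        bdf=${dev##*/}
        [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && bdfs+=("$bdf")  # e.g. 0000:65:00.0
    done
    (( ${#bdfs[@]} )) && printf '%s\n' "${bdfs[@]}"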
14:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:36:37.352 14:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:36:37.352 14:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:36:37.352 14:26:43 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:36:37.352 14:26:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:37.352 14:26:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:36:37.352 14:26:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:37.352 14:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:37.352 14:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:37.352 14:26:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:37.352 ************************************ 00:36:37.352 START TEST spdk_target_abort 00:36:37.352 ************************************ 00:36:37.352 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:36:37.352 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:37.352 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:36:37.352 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.352 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:37.613 spdk_targetn1 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:37.613 [2024-12-05 14:26:43.769326] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:37.613 [2024-12-05 14:26:43.817655] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:37.613 14:26:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:37.874 [2024-12-05 14:26:43.960128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
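The rabort helper entered above folds trtype, adrfam, traddr, trsvcid and subnqn into a single SPDK transport-ID string and then replays the abort example once per queue depth; the qd=4 run whose abort notices continue below is the first of three. Its core reduces to this sketch (SPDK_DIR stands in for the workspace path in the trace):

    #!/usr/bin/env bash
    # Sketch of the rabort pattern: build an SPDK transport-ID string, then
    # run the abort example at several queue depths against the same target.
    trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420
    subnqn=nqn.2016-06.io.spdk:testnqn
    target=""
    for r in trtype adrfam traddr trsvcid subnqn; do
        target+="${target:+ }$r:${!r}"        # ${!r}: indirect expansion of $r
    done
    for qd in 4 24 64; do
        # -q queue depth, -w workload, -M 50% reads, -o 4096-byte I/O size
        "$SPDK_DIR"/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done

Each run's closing summary (I/O completed/failed, aborts submitted/failed to submit, success/unsuccessful) is what the test asserts on.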
cid:188 nsid:1 lba:24 len:8 PRP1 0x200004abe000 PRP2 0x0 00:36:37.874 [2024-12-05 14:26:43.960161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0004 p:1 m:0 dnr:0 00:36:37.874 [2024-12-05 14:26:43.968623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:312 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:36:37.874 [2024-12-05 14:26:43.968647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0029 p:1 m:0 dnr:0 00:36:37.874 [2024-12-05 14:26:44.006962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1608 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:36:37.874 [2024-12-05 14:26:44.006985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00cb p:1 m:0 dnr:0 00:36:37.874 [2024-12-05 14:26:44.015298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1904 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:36:37.874 [2024-12-05 14:26:44.015320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00ef p:1 m:0 dnr:0 00:36:37.874 [2024-12-05 14:26:44.016257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1952 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:36:37.874 [2024-12-05 14:26:44.016276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00f5 p:1 m:0 dnr:0 00:36:37.874 [2024-12-05 14:26:44.062980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3312 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:36:37.874 [2024-12-05 14:26:44.063003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00a1 p:0 m:0 dnr:0 00:36:41.172 Initializing NVMe Controllers 00:36:41.172 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:41.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:41.172 Initialization complete. Launching workers. 
00:36:41.172 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12532, failed: 6 00:36:41.172 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2631, failed to submit 9907 00:36:41.172 success 737, unsuccessful 1894, failed 0 00:36:41.172 14:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:41.172 14:26:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:41.172 [2024-12-05 14:26:47.204602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:1608 len:8 PRP1 0x200004e3e000 PRP2 0x0 00:36:41.172 [2024-12-05 14:26:47.204642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:00d7 p:1 m:0 dnr:0 00:36:41.172 [2024-12-05 14:26:47.219817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:1976 len:8 PRP1 0x200004e3e000 PRP2 0x0 00:36:41.172 [2024-12-05 14:26:47.219840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:00fe p:1 m:0 dnr:0 00:36:41.172 [2024-12-05 14:26:47.243609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:2640 len:8 PRP1 0x200004e5a000 PRP2 0x0 00:36:41.172 [2024-12-05 14:26:47.243630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:41.172 [2024-12-05 14:26:47.275583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:3312 len:8 PRP1 0x200004e56000 PRP2 0x0 00:36:41.172 [2024-12-05 14:26:47.275605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:00b3 p:0 m:0 dnr:0 00:36:41.172 [2024-12-05 14:26:47.291546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:3712 len:8 PRP1 0x200004e3e000 PRP2 0x0 00:36:41.172 [2024-12-05 14:26:47.291567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:00da p:0 m:0 dnr:0 00:36:43.722 [2024-12-05 14:26:49.859241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:173 nsid:1 lba:62184 len:8 PRP1 0x200004e50000 PRP2 0x0 00:36:43.722 [2024-12-05 14:26:49.859271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:173 cdw0:0 sqhd:006b p:1 m:0 dnr:0 00:36:43.982 Initializing NVMe Controllers 00:36:43.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:43.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:43.982 Initialization complete. Launching workers. 
00:36:43.982 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8570, failed: 6 00:36:43.982 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1232, failed to submit 7344 00:36:43.982 success 349, unsuccessful 883, failed 0 00:36:43.982 14:26:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:43.982 14:26:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:47.278 Initializing NVMe Controllers 00:36:47.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:47.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:47.278 Initialization complete. Launching workers. 00:36:47.278 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43723, failed: 0 00:36:47.278 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2741, failed to submit 40982 00:36:47.278 success 586, unsuccessful 2155, failed 0 00:36:47.278 14:26:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:47.278 14:26:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.278 14:26:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:47.278 14:26:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.278 14:26:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:47.278 14:26:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.278 14:26:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:49.190 14:26:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.190 14:26:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3047031 00:36:49.190 14:26:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3047031 ']' 00:36:49.190 14:26:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3047031 00:36:49.190 14:26:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:36:49.190 14:26:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:49.190 14:26:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3047031 00:36:49.190 14:26:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:49.190 14:26:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:49.190 14:26:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3047031' 00:36:49.190 killing process with pid 3047031 00:36:49.190 14:26:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3047031 00:36:49.190 14:26:55 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@978 -- # wait 3047031 00:36:49.450 00:36:49.450 real 0m12.083s 00:36:49.450 user 0m49.243s 00:36:49.450 sys 0m2.048s 00:36:49.450 14:26:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:49.450 14:26:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:49.450 ************************************ 00:36:49.450 END TEST spdk_target_abort 00:36:49.450 ************************************ 00:36:49.450 14:26:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:49.450 14:26:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:49.450 14:26:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:49.450 14:26:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:49.450 ************************************ 00:36:49.450 START TEST kernel_target_abort 00:36:49.450 ************************************ 00:36:49.450 14:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:36:49.450 14:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:49.450 14:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:36:49.450 14:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:49.450 14:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:49.450 14:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:49.450 14:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:49.450 14:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:49.451 14:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:49.451 14:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:49.451 14:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:49.451 14:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:49.451 14:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:49.451 14:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:49.451 14:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:49.451 14:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:49.451 14:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:49.451 14:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:49.451 14:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:36:49.451 14:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:49.451 14:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:49.451 14:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:49.451 14:26:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:52.857 Waiting for block devices as requested 00:36:52.857 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:52.857 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:52.857 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:53.117 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:53.117 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:53.117 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:53.378 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:53.378 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:53.378 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:53.639 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:53.639 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:53.639 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:53.901 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:53.901 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:53.901 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:53.901 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:54.163 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:54.163 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:54.163 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:54.163 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:54.163 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:36:54.163 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:54.163 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:54.163 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:54.163 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:54.163 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:54.163 No valid GPT data, bailing 00:36:54.163 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:54.163 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:36:54.163 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:36:54.163 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:54.163 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:36:54.163 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:54.163 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:54.163 14:27:00 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:54.163 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:54.163 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:36:54.163 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:36:54.163 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:36:54.163 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:36:54.163 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:36:54.163 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:36:54.164 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:36:54.164 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:54.164 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:36:54.164 00:36:54.164 Discovery Log Number of Records 2, Generation counter 2 00:36:54.164 =====Discovery Log Entry 0====== 00:36:54.164 trtype: tcp 00:36:54.164 adrfam: ipv4 00:36:54.164 subtype: current discovery subsystem 00:36:54.164 treq: not specified, sq flow control disable supported 00:36:54.164 portid: 1 00:36:54.164 trsvcid: 4420 00:36:54.164 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:54.164 traddr: 10.0.0.1 00:36:54.164 eflags: none 00:36:54.164 sectype: none 00:36:54.164 =====Discovery Log Entry 1====== 00:36:54.164 trtype: tcp 00:36:54.164 adrfam: ipv4 00:36:54.164 subtype: nvme subsystem 00:36:54.164 treq: not specified, sq flow control disable supported 00:36:54.164 portid: 1 00:36:54.164 trsvcid: 4420 00:36:54.164 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:54.164 traddr: 10.0.0.1 00:36:54.164 eflags: none 00:36:54.164 sectype: none 00:36:54.164 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:54.164 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:54.164 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:54.164 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:54.164 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:54.164 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:54.164 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:54.164 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:54.164 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:54.164 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:54.164 14:27:00 
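configure_kernel_target, traced above, builds an NVMe/TCP target out of the kernel nvmet configfs tree: a subsystem with one namespace backed by /dev/nvme0n1, a port bound to 10.0.0.1:4420, and a symlink exposing the subsystem on the port; the nvme discover output then confirms both the discovery subsystem and nqn.2016-06.io.spdk:testnqn before the rabort loop (resuming below) is replayed against it. The xtrace hides the redirection targets of the bare echo commands, so the attribute names below are the standard nvmet configfs ones, filled in as an assumption:

    #!/usr/bin/env bash
    # Sketch: export a block device over NVMe/TCP with the kernel nvmet
    # target, mirroring the configfs writes traced above.
    set -e
    nqn=nqn.2016-06.io.spdk:testnqn
    sub=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet
    modprobe nvmet_tcp          # assumption: loaded explicitly, not via alias
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo 1            > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"   # expose the subsystem on the port

    # Teardown (clean_kernel_target in the trace) reverses the steps:
    #   rm -f $port/subsystems/$nqn
    #   rmdir $sub/namespaces/1 $port $sub
    #   modprobe -r nvmet_tcp nvmet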
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:54.164 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:54.164 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:54.164 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:54.164 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:54.164 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:54.164 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:54.164 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:54.164 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:54.164 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:54.164 14:27:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:57.470 Initializing NVMe Controllers 00:36:57.470 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:57.470 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:57.470 Initialization complete. Launching workers. 00:36:57.470 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66712, failed: 0 00:36:57.470 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 66712, failed to submit 0 00:36:57.470 success 0, unsuccessful 66712, failed 0 00:36:57.470 14:27:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:57.470 14:27:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:00.771 Initializing NVMe Controllers 00:37:00.771 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:00.771 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:00.771 Initialization complete. Launching workers. 
00:37:00.771 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 119701, failed: 0 00:37:00.771 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30146, failed to submit 89555 00:37:00.771 success 0, unsuccessful 30146, failed 0 00:37:00.771 14:27:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:00.771 14:27:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:04.076 Initializing NVMe Controllers 00:37:04.076 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:04.076 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:04.076 Initialization complete. Launching workers. 00:37:04.076 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145471, failed: 0 00:37:04.076 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36406, failed to submit 109065 00:37:04.076 success 0, unsuccessful 36406, failed 0 00:37:04.076 14:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:04.076 14:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:04.076 14:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:37:04.076 14:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:04.076 14:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:04.076 14:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:04.076 14:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:04.076 14:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:37:04.076 14:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:37:04.076 14:27:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:07.376 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:07.376 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:07.376 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:07.376 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:07.377 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:07.377 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:07.377 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:07.377 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:07.377 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:07.377 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:07.377 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:07.377 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:07.377 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:07.377 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:07.377 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:37:07.377 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:08.762 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:08.762 00:37:08.762 real 0m19.416s 00:37:08.762 user 0m9.657s 00:37:08.762 sys 0m5.526s 00:37:08.762 14:27:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:08.762 14:27:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:08.762 ************************************ 00:37:08.762 END TEST kernel_target_abort 00:37:08.762 ************************************ 00:37:09.022 14:27:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:09.022 14:27:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:09.022 14:27:15 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:09.022 14:27:15 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:37:09.022 14:27:15 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:09.022 14:27:15 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:09.022 14:27:15 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:09.022 14:27:15 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:09.022 rmmod nvme_tcp 00:37:09.022 rmmod nvme_fabrics 00:37:09.022 rmmod nvme_keyring 00:37:09.022 14:27:15 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:09.022 14:27:15 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:09.022 14:27:15 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:09.022 14:27:15 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3047031 ']' 00:37:09.022 14:27:15 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3047031 00:37:09.022 14:27:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3047031 ']' 00:37:09.022 14:27:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3047031 00:37:09.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3047031) - No such process 00:37:09.022 14:27:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3047031 is not found' 00:37:09.022 Process with pid 3047031 is not found 00:37:09.022 14:27:15 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:09.022 14:27:15 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:12.320 Waiting for block devices as requested 00:37:12.320 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:12.320 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:12.582 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:12.582 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:12.582 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:12.843 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:12.843 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:12.843 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:13.104 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:13.104 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:13.104 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:13.364 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:13.364 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:13.364 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:13.623 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:13.623 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:13.623 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:13.883 14:27:19 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:13.883 14:27:19 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:13.883 14:27:19 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:37:13.883 14:27:19 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:37:13.883 14:27:19 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:13.883 14:27:19 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:37:13.883 14:27:19 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:13.883 14:27:19 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:13.883 14:27:19 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:13.883 14:27:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:13.883 14:27:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:15.797 14:27:22 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:15.797 00:37:15.797 real 0m50.646s 00:37:15.797 user 1m4.054s 00:37:15.797 sys 0m18.183s 00:37:15.797 14:27:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:15.797 14:27:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:15.797 ************************************ 00:37:15.797 END TEST nvmf_abort_qd_sizes 00:37:15.797 ************************************ 00:37:15.797 14:27:22 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:15.797 14:27:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:15.797 14:27:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:15.797 14:27:22 -- common/autotest_common.sh@10 -- # set +x 00:37:16.059 ************************************ 00:37:16.059 START TEST keyring_file 00:37:16.059 ************************************ 00:37:16.059 14:27:22 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:16.059 * Looking for test storage... 
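Before the keyring_file test gets underway below, nvmftestfini above unwinds the whole fixture: it unloads nvme_tcp, nvme_fabrics and nvme_keyring, confirms the target pid is already gone, restores iptables by dropping every rule tagged SPDK_NVMF, removes the test namespace, and flushes the initiator interface. The tagged-rule restore is worth calling out; a sketch of the teardown (the namespace helper name is an assumption, the trace uses _remove_spdk_ns):

    # Save the ruleset, drop every line carrying the SPDK_NVMF comment tag,
    # and load the result back atomically.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Namespace and address teardown then reduce to:
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1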
00:37:16.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:16.059 14:27:22 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:16.059 14:27:22 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:37:16.059 14:27:22 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:16.059 14:27:22 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@345 -- # : 1 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@353 -- # local d=1 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@355 -- # echo 1 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@353 -- # local d=2 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@355 -- # echo 2 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:16.059 14:27:22 keyring_file -- scripts/common.sh@368 -- # return 0 00:37:16.059 14:27:22 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:16.059 14:27:22 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:16.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.059 --rc genhtml_branch_coverage=1 00:37:16.059 --rc genhtml_function_coverage=1 00:37:16.059 --rc genhtml_legend=1 00:37:16.059 --rc geninfo_all_blocks=1 00:37:16.059 --rc geninfo_unexecuted_blocks=1 00:37:16.059 00:37:16.059 ' 00:37:16.059 14:27:22 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:16.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.059 --rc genhtml_branch_coverage=1 00:37:16.059 --rc genhtml_function_coverage=1 00:37:16.059 --rc genhtml_legend=1 00:37:16.059 --rc geninfo_all_blocks=1 
00:37:16.059 --rc geninfo_unexecuted_blocks=1 00:37:16.059 00:37:16.059 ' 00:37:16.059 14:27:22 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:16.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.059 --rc genhtml_branch_coverage=1 00:37:16.059 --rc genhtml_function_coverage=1 00:37:16.059 --rc genhtml_legend=1 00:37:16.059 --rc geninfo_all_blocks=1 00:37:16.059 --rc geninfo_unexecuted_blocks=1 00:37:16.059 00:37:16.059 ' 00:37:16.059 14:27:22 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:16.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:16.059 --rc genhtml_branch_coverage=1 00:37:16.059 --rc genhtml_function_coverage=1 00:37:16.059 --rc genhtml_legend=1 00:37:16.059 --rc geninfo_all_blocks=1 00:37:16.059 --rc geninfo_unexecuted_blocks=1 00:37:16.059 00:37:16.059 ' 00:37:16.059 14:27:22 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:16.059 14:27:22 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:16.059 14:27:22 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:16.059 14:27:22 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:16.059 14:27:22 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:16.059 14:27:22 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:16.060 14:27:22 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:16.060 14:27:22 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:16.060 14:27:22 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:16.060 14:27:22 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:16.060 14:27:22 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:16.060 14:27:22 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:16.060 14:27:22 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:16.060 14:27:22 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:16.060 14:27:22 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:16.060 14:27:22 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:16.060 14:27:22 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:16.060 14:27:22 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:16.060 14:27:22 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:16.060 14:27:22 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:16.060 14:27:22 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:37:16.060 14:27:22 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:16.060 14:27:22 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:16.060 14:27:22 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:16.060 14:27:22 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.060 14:27:22 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.060 14:27:22 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.060 14:27:22 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:16.060 14:27:22 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:16.060 14:27:22 keyring_file -- nvmf/common.sh@51 -- # : 0 00:37:16.060 14:27:22 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:16.060 14:27:22 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:16.060 14:27:22 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:16.060 14:27:22 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:16.060 14:27:22 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:16.060 14:27:22 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:16.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:16.060 14:27:22 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:16.060 14:27:22 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:16.060 14:27:22 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:16.060 14:27:22 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:16.060 14:27:22 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:16.060 14:27:22 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:16.060 14:27:22 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:16.060 14:27:22 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:16.060 14:27:22 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:16.060 14:27:22 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:16.060 14:27:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
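prep_key, entered just above for key0 (raw hex 00112233445566778899aabbccddeeff, digest 0), writes a TLS PSK to a mode-0600 temp file; the mktemp, format_interchange_psk and python steps traced below do the formatting. Assuming the NVMe TLS PSK interchange layout NVMeTLSkey-1:<hh>:<base64(key bytes || CRC-32)>: with hh=00 meaning no hash digest, the flow reduces to this sketch, where the inline Python stands in for the python - call visible in the trace:

    #!/usr/bin/env bash
    # Sketch: produce an NVMe TLS PSK interchange string from a raw hex key.
    key=00112233445566778899aabbccddeeff
    path=$(mktemp)          # e.g. /tmp/tmp.l6XHvIgxQB in the trace
    python3 - "$key" <<'PY' > "$path"
    import base64, binascii, sys, zlib
    raw = binascii.unhexlify(sys.argv[1])
    crc = zlib.crc32(raw).to_bytes(4, "little")   # assumption: CRC-32 appended little-endian
    print("NVMeTLSkey-1:00:" + base64.b64encode(raw + crc).decode() + ":")
    PY
    chmod 0600 "$path"      # keyring files must not be world-readable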
00:37:16.060 14:27:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:16.060 14:27:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:16.060 14:27:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:16.321 14:27:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:16.321 14:27:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.l6XHvIgxQB 00:37:16.321 14:27:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:16.321 14:27:22 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:16.321 14:27:22 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:16.321 14:27:22 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:16.321 14:27:22 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:37:16.321 14:27:22 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:16.321 14:27:22 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:16.321 14:27:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.l6XHvIgxQB 00:37:16.321 14:27:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.l6XHvIgxQB 00:37:16.321 14:27:22 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.l6XHvIgxQB 00:37:16.321 14:27:22 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:16.321 14:27:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:16.321 14:27:22 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:16.321 14:27:22 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:16.321 14:27:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:16.321 14:27:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:16.321 14:27:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lYPlzM8tw2 00:37:16.321 14:27:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:16.321 14:27:22 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:16.321 14:27:22 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:16.321 14:27:22 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:16.321 14:27:22 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:37:16.321 14:27:22 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:16.321 14:27:22 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:16.321 14:27:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lYPlzM8tw2 00:37:16.321 14:27:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lYPlzM8tw2 00:37:16.321 14:27:22 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.lYPlzM8tw2 00:37:16.321 14:27:22 keyring_file -- keyring/file.sh@30 -- # tgtpid=3057505 00:37:16.321 14:27:22 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3057505 00:37:16.321 14:27:22 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:16.321 14:27:22 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3057505 ']' 00:37:16.321 14:27:22 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:16.321 14:27:22 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:16.321 14:27:22 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:16.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:16.321 14:27:22 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:16.321 14:27:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:16.321 [2024-12-05 14:27:22.536888] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:37:16.322 [2024-12-05 14:27:22.536968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3057505 ] 00:37:16.583 [2024-12-05 14:27:22.628006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.583 [2024-12-05 14:27:22.680871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:17.154 14:27:23 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:17.154 14:27:23 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:17.154 14:27:23 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:17.154 14:27:23 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.154 14:27:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:17.154 [2024-12-05 14:27:23.341368] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:17.154 null0 00:37:17.154 [2024-12-05 14:27:23.373423] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:17.154 [2024-12-05 14:27:23.373929] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:17.154 14:27:23 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.154 14:27:23 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:17.154 14:27:23 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:17.154 14:27:23 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:17.154 14:27:23 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:17.154 14:27:23 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:17.154 14:27:23 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:17.154 14:27:23 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:17.154 14:27:23 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:17.154 14:27:23 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.155 14:27:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:17.155 [2024-12-05 14:27:23.405487] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:17.155 request: 00:37:17.155 { 00:37:17.155 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:17.155 "secure_channel": false, 00:37:17.155 "listen_address": { 00:37:17.155 "trtype": "tcp", 00:37:17.155 "traddr": "127.0.0.1", 00:37:17.155 "trsvcid": "4420" 00:37:17.155 }, 00:37:17.155 "method": "nvmf_subsystem_add_listener", 00:37:17.155 "req_id": 1 00:37:17.155 } 00:37:17.155 Got JSON-RPC error response 00:37:17.155 response: 00:37:17.155 { 00:37:17.155 
"code": -32602, 00:37:17.155 "message": "Invalid parameters" 00:37:17.155 } 00:37:17.155 14:27:23 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:17.155 14:27:23 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:17.155 14:27:23 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:17.155 14:27:23 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:17.155 14:27:23 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:17.155 14:27:23 keyring_file -- keyring/file.sh@47 -- # bperfpid=3057785 00:37:17.155 14:27:23 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3057785 /var/tmp/bperf.sock 00:37:17.155 14:27:23 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3057785 ']' 00:37:17.155 14:27:23 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:17.155 14:27:23 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:17.155 14:27:23 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:17.155 14:27:23 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:17.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:17.155 14:27:23 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:17.155 14:27:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:17.415 [2024-12-05 14:27:23.465716] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:37:17.415 [2024-12-05 14:27:23.465786] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3057785 ] 00:37:17.415 [2024-12-05 14:27:23.556821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:17.415 [2024-12-05 14:27:23.608857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:18.358 14:27:24 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:18.358 14:27:24 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:18.358 14:27:24 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.l6XHvIgxQB 00:37:18.358 14:27:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.l6XHvIgxQB 00:37:18.358 14:27:24 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.lYPlzM8tw2 00:37:18.358 14:27:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.lYPlzM8tw2 00:37:18.620 14:27:24 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:37:18.620 14:27:24 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:18.620 14:27:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:18.620 14:27:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:18.620 14:27:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:37:18.620 14:27:24 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.l6XHvIgxQB == \/\t\m\p\/\t\m\p\.\l\6\X\H\v\I\g\x\Q\B ]] 00:37:18.620 14:27:24 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:37:18.620 14:27:24 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:37:18.620 14:27:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:18.620 14:27:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:18.620 14:27:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:18.889 14:27:25 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.lYPlzM8tw2 == \/\t\m\p\/\t\m\p\.\l\Y\P\l\z\M\8\t\w\2 ]] 00:37:18.889 14:27:25 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:37:18.889 14:27:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:18.889 14:27:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:18.889 14:27:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:18.889 14:27:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:18.889 14:27:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:19.149 14:27:25 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:19.149 14:27:25 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:37:19.149 14:27:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:19.149 14:27:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:19.149 14:27:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:19.149 14:27:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:19.149 14:27:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:19.409 14:27:25 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:37:19.409 14:27:25 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:19.409 14:27:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:19.409 [2024-12-05 14:27:25.620528] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:19.409 nvme0n1 00:37:19.670 14:27:25 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:37:19.670 14:27:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:19.670 14:27:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:19.670 14:27:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:19.670 14:27:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:19.670 14:27:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:19.670 14:27:25 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:37:19.670 14:27:25 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:37:19.670 14:27:25 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:37:19.670 14:27:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:19.670 14:27:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:19.670 14:27:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:19.670 14:27:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:19.930 14:27:26 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:37:19.930 14:27:26 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:19.930 Running I/O for 1 seconds... 00:37:21.314 16861.00 IOPS, 65.86 MiB/s 00:37:21.314 Latency(us) 00:37:21.314 [2024-12-05T13:27:27.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:21.314 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:21.314 nvme0n1 : 1.01 16904.82 66.03 0.00 0.00 7555.20 2976.43 18677.76 00:37:21.314 [2024-12-05T13:27:27.614Z] =================================================================================================================== 00:37:21.314 [2024-12-05T13:27:27.614Z] Total : 16904.82 66.03 0.00 0.00 7555.20 2976.43 18677.76 00:37:21.314 { 00:37:21.314 "results": [ 00:37:21.314 { 00:37:21.314 "job": "nvme0n1", 00:37:21.314 "core_mask": "0x2", 00:37:21.314 "workload": "randrw", 00:37:21.314 "percentage": 50, 00:37:21.314 "status": "finished", 00:37:21.314 "queue_depth": 128, 00:37:21.314 "io_size": 4096, 00:37:21.314 "runtime": 1.005039, 00:37:21.314 "iops": 16904.816629006436, 00:37:21.314 "mibps": 66.03443995705639, 00:37:21.314 "io_failed": 0, 00:37:21.314 "io_timeout": 0, 00:37:21.314 "avg_latency_us": 7555.204595644497, 00:37:21.314 "min_latency_us": 2976.4266666666667, 00:37:21.314 "max_latency_us": 18677.76 00:37:21.314 } 00:37:21.314 ], 00:37:21.314 "core_count": 1 00:37:21.314 } 00:37:21.314 14:27:27 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:21.314 14:27:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:21.314 14:27:27 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:37:21.314 14:27:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:21.314 14:27:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:21.315 14:27:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:21.315 14:27:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:21.315 14:27:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:21.315 14:27:27 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:21.315 14:27:27 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:37:21.315 14:27:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:21.315 14:27:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:21.315 14:27:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:21.315 14:27:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:21.315 14:27:27 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:21.575 14:27:27 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:37:21.575 14:27:27 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:21.575 14:27:27 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:21.575 14:27:27 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:21.575 14:27:27 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:21.575 14:27:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:21.575 14:27:27 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:21.575 14:27:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:21.575 14:27:27 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:21.575 14:27:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:21.836 [2024-12-05 14:27:27.937068] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:21.836 [2024-12-05 14:27:27.937224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126ac50 (107): Transport endpoint is not connected 00:37:21.836 [2024-12-05 14:27:27.938220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126ac50 (9): Bad file descriptor 00:37:21.836 [2024-12-05 14:27:27.939221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:37:21.836 [2024-12-05 14:27:27.939230] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:21.836 [2024-12-05 14:27:27.939236] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:21.836 [2024-12-05 14:27:27.939243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
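The bdev_nvme_attach_controller above is wrapped in NOT, so the test passes precisely because the attach cannot come up with the mismatched PSK; the request/response dump that follows records the -5 (Input/output error) the wrapper was waiting for. A reduced sketch of the helper's semantics (the real autotest_common.sh version also special-cases signal-style exit codes above 128 and handles xtrace state):

NOT() {
  local es=0
  "$@" || es=$?
  (( es > 0 && es <= 128 ))   # return success only when the wrapped command failed normally
}

# e.g. NOT "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller ... --psk key1
# exits 0 here, because attaching with the wrong key is the expected failure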
00:37:21.836 request: 00:37:21.836 { 00:37:21.836 "name": "nvme0", 00:37:21.836 "trtype": "tcp", 00:37:21.836 "traddr": "127.0.0.1", 00:37:21.836 "adrfam": "ipv4", 00:37:21.836 "trsvcid": "4420", 00:37:21.836 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:21.836 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:21.836 "prchk_reftag": false, 00:37:21.836 "prchk_guard": false, 00:37:21.836 "hdgst": false, 00:37:21.836 "ddgst": false, 00:37:21.836 "psk": "key1", 00:37:21.836 "allow_unrecognized_csi": false, 00:37:21.836 "method": "bdev_nvme_attach_controller", 00:37:21.836 "req_id": 1 00:37:21.836 } 00:37:21.836 Got JSON-RPC error response 00:37:21.836 response: 00:37:21.836 { 00:37:21.836 "code": -5, 00:37:21.836 "message": "Input/output error" 00:37:21.836 } 00:37:21.836 14:27:27 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:21.836 14:27:27 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:21.836 14:27:27 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:21.836 14:27:27 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:21.836 14:27:27 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:37:21.836 14:27:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:21.836 14:27:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:21.836 14:27:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:21.836 14:27:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:21.836 14:27:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:22.096 14:27:28 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:22.096 14:27:28 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:37:22.096 14:27:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:22.096 14:27:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:22.096 14:27:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:22.096 14:27:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:22.096 14:27:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:22.096 14:27:28 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:37:22.096 14:27:28 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:37:22.096 14:27:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:22.356 14:27:28 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:37:22.356 14:27:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:22.356 14:27:28 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:37:22.356 14:27:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:22.356 14:27:28 keyring_file -- keyring/file.sh@78 -- # jq length 00:37:22.616 14:27:28 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:37:22.616 14:27:28 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.l6XHvIgxQB 00:37:22.616 14:27:28 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.l6XHvIgxQB 00:37:22.616 14:27:28 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:22.616 14:27:28 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.l6XHvIgxQB 00:37:22.616 14:27:28 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:22.616 14:27:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:22.616 14:27:28 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:22.616 14:27:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:22.617 14:27:28 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.l6XHvIgxQB 00:37:22.617 14:27:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.l6XHvIgxQB 00:37:22.878 [2024-12-05 14:27:28.980944] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.l6XHvIgxQB': 0100660 00:37:22.878 [2024-12-05 14:27:28.980963] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:22.878 request: 00:37:22.878 { 00:37:22.878 "name": "key0", 00:37:22.878 "path": "/tmp/tmp.l6XHvIgxQB", 00:37:22.878 "method": "keyring_file_add_key", 00:37:22.878 "req_id": 1 00:37:22.878 } 00:37:22.878 Got JSON-RPC error response 00:37:22.878 response: 00:37:22.878 { 00:37:22.878 "code": -1, 00:37:22.878 "message": "Operation not permitted" 00:37:22.878 } 00:37:22.878 14:27:28 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:22.878 14:27:28 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:22.878 14:27:28 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:22.878 14:27:28 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:22.878 14:27:28 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.l6XHvIgxQB 00:37:22.878 14:27:29 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.l6XHvIgxQB 00:37:22.878 14:27:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.l6XHvIgxQB 00:37:22.878 14:27:29 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.l6XHvIgxQB 00:37:22.878 14:27:29 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:37:22.878 14:27:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:22.878 14:27:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:23.139 14:27:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:23.139 14:27:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:23.139 14:27:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:23.139 14:27:29 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:37:23.139 14:27:29 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:23.139 14:27:29 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:23.139 14:27:29 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:23.139 14:27:29 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:23.139 14:27:29 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:23.139 14:27:29 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:23.139 14:27:29 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:23.139 14:27:29 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:23.139 14:27:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:23.400 [2024-12-05 14:27:29.490243] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.l6XHvIgxQB': No such file or directory 00:37:23.400 [2024-12-05 14:27:29.490256] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:23.400 [2024-12-05 14:27:29.490269] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:23.400 [2024-12-05 14:27:29.490275] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:37:23.400 [2024-12-05 14:27:29.490281] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:23.400 [2024-12-05 14:27:29.490285] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:23.400 request: 00:37:23.400 { 00:37:23.400 "name": "nvme0", 00:37:23.400 "trtype": "tcp", 00:37:23.400 "traddr": "127.0.0.1", 00:37:23.400 "adrfam": "ipv4", 00:37:23.400 "trsvcid": "4420", 00:37:23.400 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:23.400 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:23.400 "prchk_reftag": false, 00:37:23.400 "prchk_guard": false, 00:37:23.400 "hdgst": false, 00:37:23.400 "ddgst": false, 00:37:23.400 "psk": "key0", 00:37:23.400 "allow_unrecognized_csi": false, 00:37:23.400 "method": "bdev_nvme_attach_controller", 00:37:23.400 "req_id": 1 00:37:23.400 } 00:37:23.400 Got JSON-RPC error response 00:37:23.400 response: 00:37:23.400 { 00:37:23.400 "code": -19, 00:37:23.400 "message": "No such device" 00:37:23.400 } 00:37:23.400 14:27:29 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:23.400 14:27:29 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:23.400 14:27:29 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:23.400 14:27:29 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:23.400 14:27:29 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:37:23.400 14:27:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:23.400 14:27:29 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:23.400 14:27:29 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:37:23.400 14:27:29 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:23.400 14:27:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:23.400 14:27:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:23.400 14:27:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:23.400 14:27:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.OH4OJqgAMh 00:37:23.400 14:27:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:23.400 14:27:29 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:23.400 14:27:29 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:23.400 14:27:29 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:23.400 14:27:29 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:37:23.400 14:27:29 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:23.400 14:27:29 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:23.660 14:27:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.OH4OJqgAMh 00:37:23.660 14:27:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.OH4OJqgAMh 00:37:23.660 14:27:29 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.OH4OJqgAMh 00:37:23.660 14:27:29 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OH4OJqgAMh 00:37:23.660 14:27:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OH4OJqgAMh 00:37:23.660 14:27:29 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:23.660 14:27:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:23.921 nvme0n1 00:37:23.921 14:27:30 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:37:23.921 14:27:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:23.921 14:27:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:23.921 14:27:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:23.921 14:27:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:23.921 14:27:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:24.181 14:27:30 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:37:24.181 14:27:30 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:37:24.181 14:27:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:24.441 14:27:30 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:37:24.441 14:27:30 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:37:24.441 14:27:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:24.441 14:27:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:24.441 14:27:30 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:24.441 14:27:30 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:37:24.441 14:27:30 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:37:24.441 14:27:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:24.441 14:27:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:24.441 14:27:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:24.441 14:27:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:24.441 14:27:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:24.701 14:27:30 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:37:24.701 14:27:30 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:24.701 14:27:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:24.961 14:27:31 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:37:24.961 14:27:31 keyring_file -- keyring/file.sh@105 -- # jq length 00:37:24.961 14:27:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:24.961 14:27:31 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:37:24.961 14:27:31 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OH4OJqgAMh 00:37:24.961 14:27:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OH4OJqgAMh 00:37:25.221 14:27:31 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.lYPlzM8tw2 00:37:25.221 14:27:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.lYPlzM8tw2 00:37:25.482 14:27:31 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:25.482 14:27:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:25.742 nvme0n1 00:37:25.742 14:27:31 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:37:25.742 14:27:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:26.003 14:27:32 keyring_file -- keyring/file.sh@113 -- # config='{ 00:37:26.003 "subsystems": [ 00:37:26.003 { 00:37:26.003 "subsystem": "keyring", 00:37:26.003 "config": [ 00:37:26.003 { 00:37:26.003 "method": "keyring_file_add_key", 00:37:26.003 "params": { 00:37:26.003 "name": "key0", 00:37:26.003 "path": "/tmp/tmp.OH4OJqgAMh" 00:37:26.003 } 00:37:26.003 }, 00:37:26.003 { 00:37:26.003 "method": "keyring_file_add_key", 00:37:26.003 "params": { 00:37:26.003 "name": "key1", 00:37:26.003 "path": "/tmp/tmp.lYPlzM8tw2" 00:37:26.003 } 00:37:26.003 } 00:37:26.003 ] 00:37:26.003 
}, 00:37:26.003 { 00:37:26.003 "subsystem": "iobuf", 00:37:26.003 "config": [ 00:37:26.003 { 00:37:26.003 "method": "iobuf_set_options", 00:37:26.003 "params": { 00:37:26.003 "small_pool_count": 8192, 00:37:26.003 "large_pool_count": 1024, 00:37:26.003 "small_bufsize": 8192, 00:37:26.003 "large_bufsize": 135168, 00:37:26.003 "enable_numa": false 00:37:26.003 } 00:37:26.003 } 00:37:26.003 ] 00:37:26.003 }, 00:37:26.003 { 00:37:26.003 "subsystem": "sock", 00:37:26.003 "config": [ 00:37:26.003 { 00:37:26.003 "method": "sock_set_default_impl", 00:37:26.003 "params": { 00:37:26.003 "impl_name": "posix" 00:37:26.003 } 00:37:26.003 }, 00:37:26.003 { 00:37:26.003 "method": "sock_impl_set_options", 00:37:26.003 "params": { 00:37:26.003 "impl_name": "ssl", 00:37:26.003 "recv_buf_size": 4096, 00:37:26.003 "send_buf_size": 4096, 00:37:26.003 "enable_recv_pipe": true, 00:37:26.003 "enable_quickack": false, 00:37:26.003 "enable_placement_id": 0, 00:37:26.003 "enable_zerocopy_send_server": true, 00:37:26.003 "enable_zerocopy_send_client": false, 00:37:26.003 "zerocopy_threshold": 0, 00:37:26.003 "tls_version": 0, 00:37:26.003 "enable_ktls": false 00:37:26.003 } 00:37:26.003 }, 00:37:26.003 { 00:37:26.003 "method": "sock_impl_set_options", 00:37:26.003 "params": { 00:37:26.003 "impl_name": "posix", 00:37:26.003 "recv_buf_size": 2097152, 00:37:26.003 "send_buf_size": 2097152, 00:37:26.003 "enable_recv_pipe": true, 00:37:26.003 "enable_quickack": false, 00:37:26.003 "enable_placement_id": 0, 00:37:26.003 "enable_zerocopy_send_server": true, 00:37:26.003 "enable_zerocopy_send_client": false, 00:37:26.003 "zerocopy_threshold": 0, 00:37:26.003 "tls_version": 0, 00:37:26.003 "enable_ktls": false 00:37:26.003 } 00:37:26.003 } 00:37:26.003 ] 00:37:26.003 }, 00:37:26.003 { 00:37:26.003 "subsystem": "vmd", 00:37:26.003 "config": [] 00:37:26.003 }, 00:37:26.003 { 00:37:26.003 "subsystem": "accel", 00:37:26.003 "config": [ 00:37:26.003 { 00:37:26.003 "method": "accel_set_options", 00:37:26.003 "params": { 00:37:26.003 "small_cache_size": 128, 00:37:26.003 "large_cache_size": 16, 00:37:26.003 "task_count": 2048, 00:37:26.003 "sequence_count": 2048, 00:37:26.003 "buf_count": 2048 00:37:26.003 } 00:37:26.003 } 00:37:26.003 ] 00:37:26.003 }, 00:37:26.003 { 00:37:26.003 "subsystem": "bdev", 00:37:26.003 "config": [ 00:37:26.003 { 00:37:26.003 "method": "bdev_set_options", 00:37:26.003 "params": { 00:37:26.003 "bdev_io_pool_size": 65535, 00:37:26.003 "bdev_io_cache_size": 256, 00:37:26.003 "bdev_auto_examine": true, 00:37:26.003 "iobuf_small_cache_size": 128, 00:37:26.003 "iobuf_large_cache_size": 16 00:37:26.003 } 00:37:26.003 }, 00:37:26.003 { 00:37:26.003 "method": "bdev_raid_set_options", 00:37:26.003 "params": { 00:37:26.003 "process_window_size_kb": 1024, 00:37:26.003 "process_max_bandwidth_mb_sec": 0 00:37:26.003 } 00:37:26.003 }, 00:37:26.003 { 00:37:26.003 "method": "bdev_iscsi_set_options", 00:37:26.003 "params": { 00:37:26.003 "timeout_sec": 30 00:37:26.003 } 00:37:26.003 }, 00:37:26.003 { 00:37:26.003 "method": "bdev_nvme_set_options", 00:37:26.003 "params": { 00:37:26.003 "action_on_timeout": "none", 00:37:26.003 "timeout_us": 0, 00:37:26.003 "timeout_admin_us": 0, 00:37:26.003 "keep_alive_timeout_ms": 10000, 00:37:26.003 "arbitration_burst": 0, 00:37:26.003 "low_priority_weight": 0, 00:37:26.003 "medium_priority_weight": 0, 00:37:26.003 "high_priority_weight": 0, 00:37:26.003 "nvme_adminq_poll_period_us": 10000, 00:37:26.003 "nvme_ioq_poll_period_us": 0, 00:37:26.003 "io_queue_requests": 512, 00:37:26.003 
"delay_cmd_submit": true, 00:37:26.003 "transport_retry_count": 4, 00:37:26.003 "bdev_retry_count": 3, 00:37:26.003 "transport_ack_timeout": 0, 00:37:26.003 "ctrlr_loss_timeout_sec": 0, 00:37:26.003 "reconnect_delay_sec": 0, 00:37:26.003 "fast_io_fail_timeout_sec": 0, 00:37:26.003 "disable_auto_failback": false, 00:37:26.003 "generate_uuids": false, 00:37:26.003 "transport_tos": 0, 00:37:26.003 "nvme_error_stat": false, 00:37:26.003 "rdma_srq_size": 0, 00:37:26.003 "io_path_stat": false, 00:37:26.003 "allow_accel_sequence": false, 00:37:26.003 "rdma_max_cq_size": 0, 00:37:26.003 "rdma_cm_event_timeout_ms": 0, 00:37:26.003 "dhchap_digests": [ 00:37:26.003 "sha256", 00:37:26.003 "sha384", 00:37:26.003 "sha512" 00:37:26.003 ], 00:37:26.004 "dhchap_dhgroups": [ 00:37:26.004 "null", 00:37:26.004 "ffdhe2048", 00:37:26.004 "ffdhe3072", 00:37:26.004 "ffdhe4096", 00:37:26.004 "ffdhe6144", 00:37:26.004 "ffdhe8192" 00:37:26.004 ] 00:37:26.004 } 00:37:26.004 }, 00:37:26.004 { 00:37:26.004 "method": "bdev_nvme_attach_controller", 00:37:26.004 "params": { 00:37:26.004 "name": "nvme0", 00:37:26.004 "trtype": "TCP", 00:37:26.004 "adrfam": "IPv4", 00:37:26.004 "traddr": "127.0.0.1", 00:37:26.004 "trsvcid": "4420", 00:37:26.004 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:26.004 "prchk_reftag": false, 00:37:26.004 "prchk_guard": false, 00:37:26.004 "ctrlr_loss_timeout_sec": 0, 00:37:26.004 "reconnect_delay_sec": 0, 00:37:26.004 "fast_io_fail_timeout_sec": 0, 00:37:26.004 "psk": "key0", 00:37:26.004 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:26.004 "hdgst": false, 00:37:26.004 "ddgst": false, 00:37:26.004 "multipath": "multipath" 00:37:26.004 } 00:37:26.004 }, 00:37:26.004 { 00:37:26.004 "method": "bdev_nvme_set_hotplug", 00:37:26.004 "params": { 00:37:26.004 "period_us": 100000, 00:37:26.004 "enable": false 00:37:26.004 } 00:37:26.004 }, 00:37:26.004 { 00:37:26.004 "method": "bdev_wait_for_examine" 00:37:26.004 } 00:37:26.004 ] 00:37:26.004 }, 00:37:26.004 { 00:37:26.004 "subsystem": "nbd", 00:37:26.004 "config": [] 00:37:26.004 } 00:37:26.004 ] 00:37:26.004 }' 00:37:26.004 14:27:32 keyring_file -- keyring/file.sh@115 -- # killprocess 3057785 00:37:26.004 14:27:32 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3057785 ']' 00:37:26.004 14:27:32 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3057785 00:37:26.004 14:27:32 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:26.004 14:27:32 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:26.004 14:27:32 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3057785 00:37:26.004 14:27:32 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:26.004 14:27:32 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:26.004 14:27:32 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3057785' 00:37:26.004 killing process with pid 3057785 00:37:26.004 14:27:32 keyring_file -- common/autotest_common.sh@973 -- # kill 3057785 00:37:26.004 Received shutdown signal, test time was about 1.000000 seconds 00:37:26.004 00:37:26.004 Latency(us) 00:37:26.004 [2024-12-05T13:27:32.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:26.004 [2024-12-05T13:27:32.304Z] =================================================================================================================== 00:37:26.004 [2024-12-05T13:27:32.304Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:26.004 14:27:32 
keyring_file -- common/autotest_common.sh@978 -- # wait 3057785 00:37:26.004 14:27:32 keyring_file -- keyring/file.sh@118 -- # bperfpid=3059555 00:37:26.004 14:27:32 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3059555 /var/tmp/bperf.sock 00:37:26.004 14:27:32 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3059555 ']' 00:37:26.004 14:27:32 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:26.004 14:27:32 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:26.004 14:27:32 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:26.004 14:27:32 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:26.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:26.004 14:27:32 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:26.004 14:27:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:26.004 14:27:32 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:37:26.004 "subsystems": [ 00:37:26.004 { 00:37:26.004 "subsystem": "keyring", 00:37:26.004 "config": [ 00:37:26.004 { 00:37:26.004 "method": "keyring_file_add_key", 00:37:26.004 "params": { 00:37:26.004 "name": "key0", 00:37:26.004 "path": "/tmp/tmp.OH4OJqgAMh" 00:37:26.004 } 00:37:26.004 }, 00:37:26.004 { 00:37:26.004 "method": "keyring_file_add_key", 00:37:26.004 "params": { 00:37:26.004 "name": "key1", 00:37:26.004 "path": "/tmp/tmp.lYPlzM8tw2" 00:37:26.004 } 00:37:26.004 } 00:37:26.004 ] 00:37:26.004 }, 00:37:26.004 { 00:37:26.004 "subsystem": "iobuf", 00:37:26.004 "config": [ 00:37:26.004 { 00:37:26.004 "method": "iobuf_set_options", 00:37:26.004 "params": { 00:37:26.004 "small_pool_count": 8192, 00:37:26.004 "large_pool_count": 1024, 00:37:26.004 "small_bufsize": 8192, 00:37:26.004 "large_bufsize": 135168, 00:37:26.004 "enable_numa": false 00:37:26.004 } 00:37:26.004 } 00:37:26.004 ] 00:37:26.004 }, 00:37:26.004 { 00:37:26.004 "subsystem": "sock", 00:37:26.004 "config": [ 00:37:26.004 { 00:37:26.004 "method": "sock_set_default_impl", 00:37:26.004 "params": { 00:37:26.004 "impl_name": "posix" 00:37:26.004 } 00:37:26.004 }, 00:37:26.004 { 00:37:26.004 "method": "sock_impl_set_options", 00:37:26.004 "params": { 00:37:26.004 "impl_name": "ssl", 00:37:26.004 "recv_buf_size": 4096, 00:37:26.004 "send_buf_size": 4096, 00:37:26.004 "enable_recv_pipe": true, 00:37:26.004 "enable_quickack": false, 00:37:26.004 "enable_placement_id": 0, 00:37:26.004 "enable_zerocopy_send_server": true, 00:37:26.004 "enable_zerocopy_send_client": false, 00:37:26.004 "zerocopy_threshold": 0, 00:37:26.004 "tls_version": 0, 00:37:26.004 "enable_ktls": false 00:37:26.004 } 00:37:26.004 }, 00:37:26.004 { 00:37:26.004 "method": "sock_impl_set_options", 00:37:26.004 "params": { 00:37:26.004 "impl_name": "posix", 00:37:26.004 "recv_buf_size": 2097152, 00:37:26.004 "send_buf_size": 2097152, 00:37:26.004 "enable_recv_pipe": true, 00:37:26.004 "enable_quickack": false, 00:37:26.004 "enable_placement_id": 0, 00:37:26.004 "enable_zerocopy_send_server": true, 00:37:26.004 "enable_zerocopy_send_client": false, 00:37:26.004 "zerocopy_threshold": 0, 00:37:26.004 "tls_version": 0, 00:37:26.004 "enable_ktls": false 00:37:26.004 } 00:37:26.004 } 00:37:26.004 ] 00:37:26.004 }, 
00:37:26.004 { 00:37:26.004 "subsystem": "vmd", 00:37:26.004 "config": [] 00:37:26.004 }, 00:37:26.004 { 00:37:26.004 "subsystem": "accel", 00:37:26.004 "config": [ 00:37:26.004 { 00:37:26.004 "method": "accel_set_options", 00:37:26.004 "params": { 00:37:26.004 "small_cache_size": 128, 00:37:26.004 "large_cache_size": 16, 00:37:26.004 "task_count": 2048, 00:37:26.004 "sequence_count": 2048, 00:37:26.004 "buf_count": 2048 00:37:26.004 } 00:37:26.004 } 00:37:26.004 ] 00:37:26.004 }, 00:37:26.004 { 00:37:26.004 "subsystem": "bdev", 00:37:26.004 "config": [ 00:37:26.004 { 00:37:26.004 "method": "bdev_set_options", 00:37:26.004 "params": { 00:37:26.004 "bdev_io_pool_size": 65535, 00:37:26.004 "bdev_io_cache_size": 256, 00:37:26.004 "bdev_auto_examine": true, 00:37:26.004 "iobuf_small_cache_size": 128, 00:37:26.004 "iobuf_large_cache_size": 16 00:37:26.004 } 00:37:26.004 }, 00:37:26.004 { 00:37:26.004 "method": "bdev_raid_set_options", 00:37:26.004 "params": { 00:37:26.004 "process_window_size_kb": 1024, 00:37:26.004 "process_max_bandwidth_mb_sec": 0 00:37:26.004 } 00:37:26.004 }, 00:37:26.004 { 00:37:26.004 "method": "bdev_iscsi_set_options", 00:37:26.004 "params": { 00:37:26.004 "timeout_sec": 30 00:37:26.004 } 00:37:26.004 }, 00:37:26.004 { 00:37:26.004 "method": "bdev_nvme_set_options", 00:37:26.004 "params": { 00:37:26.004 "action_on_timeout": "none", 00:37:26.004 "timeout_us": 0, 00:37:26.004 "timeout_admin_us": 0, 00:37:26.004 "keep_alive_timeout_ms": 10000, 00:37:26.004 "arbitration_burst": 0, 00:37:26.004 "low_priority_weight": 0, 00:37:26.004 "medium_priority_weight": 0, 00:37:26.005 "high_priority_weight": 0, 00:37:26.005 "nvme_adminq_poll_period_us": 10000, 00:37:26.005 "nvme_ioq_poll_period_us": 0, 00:37:26.005 "io_queue_requests": 512, 00:37:26.005 "delay_cmd_submit": true, 00:37:26.005 "transport_retry_count": 4, 00:37:26.005 "bdev_retry_count": 3, 00:37:26.005 "transport_ack_timeout": 0, 00:37:26.005 "ctrlr_loss_timeout_sec": 0, 00:37:26.005 "reconnect_delay_sec": 0, 00:37:26.005 "fast_io_fail_timeout_sec": 0, 00:37:26.005 "disable_auto_failback": false, 00:37:26.005 "generate_uuids": false, 00:37:26.005 "transport_tos": 0, 00:37:26.005 "nvme_error_stat": false, 00:37:26.005 "rdma_srq_size": 0, 00:37:26.005 "io_path_stat": false, 00:37:26.005 "allow_accel_sequence": false, 00:37:26.005 "rdma_max_cq_size": 0, 00:37:26.005 "rdma_cm_event_timeout_ms": 0, 00:37:26.005 "dhchap_digests": [ 00:37:26.005 "sha256", 00:37:26.005 "sha384", 00:37:26.005 "sha512" 00:37:26.005 ], 00:37:26.005 "dhchap_dhgroups": [ 00:37:26.005 "null", 00:37:26.005 "ffdhe2048", 00:37:26.005 "ffdhe3072", 00:37:26.005 "ffdhe4096", 00:37:26.005 "ffdhe6144", 00:37:26.005 "ffdhe8192" 00:37:26.005 ] 00:37:26.005 } 00:37:26.005 }, 00:37:26.005 { 00:37:26.005 "method": "bdev_nvme_attach_controller", 00:37:26.005 "params": { 00:37:26.005 "name": "nvme0", 00:37:26.005 "trtype": "TCP", 00:37:26.005 "adrfam": "IPv4", 00:37:26.005 "traddr": "127.0.0.1", 00:37:26.005 "trsvcid": "4420", 00:37:26.005 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:26.005 "prchk_reftag": false, 00:37:26.005 "prchk_guard": false, 00:37:26.005 "ctrlr_loss_timeout_sec": 0, 00:37:26.005 "reconnect_delay_sec": 0, 00:37:26.005 "fast_io_fail_timeout_sec": 0, 00:37:26.005 "psk": "key0", 00:37:26.005 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:26.005 "hdgst": false, 00:37:26.005 "ddgst": false, 00:37:26.005 "multipath": "multipath" 00:37:26.005 } 00:37:26.005 }, 00:37:26.005 { 00:37:26.005 "method": "bdev_nvme_set_hotplug", 00:37:26.005 "params": { 
00:37:26.005 "period_us": 100000, 00:37:26.005 "enable": false 00:37:26.005 } 00:37:26.005 }, 00:37:26.005 { 00:37:26.005 "method": "bdev_wait_for_examine" 00:37:26.005 } 00:37:26.005 ] 00:37:26.005 }, 00:37:26.005 { 00:37:26.005 "subsystem": "nbd", 00:37:26.005 "config": [] 00:37:26.005 } 00:37:26.005 ] 00:37:26.005 }' 00:37:26.005 [2024-12-05 14:27:32.295018] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 00:37:26.005 [2024-12-05 14:27:32.295073] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3059555 ] 00:37:26.265 [2024-12-05 14:27:32.376284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:26.265 [2024-12-05 14:27:32.405560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:26.265 [2024-12-05 14:27:32.549776] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:26.834 14:27:33 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:26.834 14:27:33 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:26.834 14:27:33 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:37:26.834 14:27:33 keyring_file -- keyring/file.sh@121 -- # jq length 00:37:26.834 14:27:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:27.094 14:27:33 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:27.094 14:27:33 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:37:27.094 14:27:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:27.094 14:27:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:27.094 14:27:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:27.094 14:27:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:27.094 14:27:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:27.355 14:27:33 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:37:27.355 14:27:33 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:37:27.355 14:27:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:27.355 14:27:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:27.355 14:27:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:27.355 14:27:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:27.355 14:27:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:27.355 14:27:33 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:37:27.355 14:27:33 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:37:27.355 14:27:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:27.355 14:27:33 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:37:27.615 14:27:33 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:37:27.616 14:27:33 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:27.616 14:27:33 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.OH4OJqgAMh /tmp/tmp.lYPlzM8tw2 00:37:27.616 14:27:33 keyring_file -- keyring/file.sh@20 -- # killprocess 3059555 00:37:27.616 14:27:33 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3059555 ']' 00:37:27.616 14:27:33 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3059555 00:37:27.616 14:27:33 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:27.616 14:27:33 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:27.616 14:27:33 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3059555 00:37:27.616 14:27:33 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:27.616 14:27:33 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:27.616 14:27:33 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3059555' 00:37:27.616 killing process with pid 3059555 00:37:27.616 14:27:33 keyring_file -- common/autotest_common.sh@973 -- # kill 3059555 00:37:27.616 Received shutdown signal, test time was about 1.000000 seconds 00:37:27.616 00:37:27.616 Latency(us) 00:37:27.616 [2024-12-05T13:27:33.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:27.616 [2024-12-05T13:27:33.916Z] =================================================================================================================== 00:37:27.616 [2024-12-05T13:27:33.916Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:27.616 14:27:33 keyring_file -- common/autotest_common.sh@978 -- # wait 3059555 00:37:27.877 14:27:33 keyring_file -- keyring/file.sh@21 -- # killprocess 3057505 00:37:27.877 14:27:33 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3057505 ']' 00:37:27.877 14:27:33 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3057505 00:37:27.877 14:27:33 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:27.877 14:27:33 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:27.877 14:27:33 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3057505 00:37:27.877 14:27:33 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:27.877 14:27:33 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:27.877 14:27:33 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3057505' 00:37:27.877 killing process with pid 3057505 00:37:27.877 14:27:33 keyring_file -- common/autotest_common.sh@973 -- # kill 3057505 00:37:27.877 14:27:33 keyring_file -- common/autotest_common.sh@978 -- # wait 3057505 00:37:28.138 00:37:28.138 real 0m12.065s 00:37:28.138 user 0m29.000s 00:37:28.138 sys 0m2.831s 00:37:28.138 14:27:34 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:28.138 14:27:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:28.138 ************************************ 00:37:28.138 END TEST keyring_file 00:37:28.138 ************************************ 00:37:28.138 14:27:34 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:37:28.138 14:27:34 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:28.138 14:27:34 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:28.138 14:27:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:28.138 14:27:34 
-- common/autotest_common.sh@10 -- # set +x 00:37:28.138 ************************************ 00:37:28.138 START TEST keyring_linux 00:37:28.138 ************************************ 00:37:28.138 14:27:34 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:28.138 Joined session keyring: 679255433 00:37:28.138 * Looking for test storage... 00:37:28.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:28.138 14:27:34 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:28.138 14:27:34 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:37:28.138 14:27:34 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:28.400 14:27:34 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@345 -- # : 1 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@368 -- # return 0 00:37:28.401 14:27:34 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:28.401 14:27:34 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:28.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:28.401 --rc genhtml_branch_coverage=1 00:37:28.401 --rc genhtml_function_coverage=1 00:37:28.401 --rc genhtml_legend=1 00:37:28.401 --rc geninfo_all_blocks=1 00:37:28.401 --rc geninfo_unexecuted_blocks=1 00:37:28.401 00:37:28.401 ' 00:37:28.401 14:27:34 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:28.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:28.401 --rc genhtml_branch_coverage=1 00:37:28.401 --rc genhtml_function_coverage=1 00:37:28.401 --rc genhtml_legend=1 00:37:28.401 --rc geninfo_all_blocks=1 00:37:28.401 --rc geninfo_unexecuted_blocks=1 00:37:28.401 00:37:28.401 ' 00:37:28.401 14:27:34 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:28.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:28.401 --rc genhtml_branch_coverage=1 00:37:28.401 --rc genhtml_function_coverage=1 00:37:28.401 --rc genhtml_legend=1 00:37:28.401 --rc geninfo_all_blocks=1 00:37:28.401 --rc geninfo_unexecuted_blocks=1 00:37:28.401 00:37:28.401 ' 00:37:28.401 14:27:34 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:28.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:28.401 --rc genhtml_branch_coverage=1 00:37:28.401 --rc genhtml_function_coverage=1 00:37:28.401 --rc genhtml_legend=1 00:37:28.401 --rc geninfo_all_blocks=1 00:37:28.401 --rc geninfo_unexecuted_blocks=1 00:37:28.401 00:37:28.401 ' 00:37:28.401 14:27:34 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:28.401 14:27:34 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:28.401 14:27:34 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:28.401 14:27:34 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.401 14:27:34 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.401 14:27:34 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:28.401 14:27:34 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:28.401 14:27:34 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
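The common.sh sourcing above derives the host identity with 'nvme gen-hostnqn', which produced nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be on this node. A minimal stand-in for that step, assuming a freshly generated UUID is acceptable (nvme-cli typically prefers a stable host UUID, such as the DMI product UUID, when one is available):

    # sketch: build a host NQN the way nvme gen-hostnqn formats one
    NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"
    NVME_HOSTID="${NVME_HOSTNQN##*uuid:}"   # common.sh keeps the bare UUID too
    echo "$NVME_HOSTNQN"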
00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:28.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:28.401 14:27:34 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:28.401 14:27:34 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:28.401 14:27:34 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:28.401 14:27:34 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:28.401 14:27:34 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:28.401 14:27:34 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:28.401 14:27:34 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:28.401 14:27:34 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:28.401 14:27:34 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:28.401 14:27:34 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:28.401 14:27:34 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:28.401 14:27:34 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:28.402 14:27:34 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:28.402 14:27:34 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:28.402 14:27:34 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:28.402 14:27:34 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:37:28.402 14:27:34 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:28.402 14:27:34 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:37:28.402 14:27:34 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:37:28.402 14:27:34 keyring_linux -- nvmf/common.sh@733 -- # python - 00:37:28.402 14:27:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:28.402 14:27:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:28.402 /tmp/:spdk-test:key0 00:37:28.402 14:27:34 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:28.402 14:27:34 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:28.402 14:27:34 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:28.402 14:27:34 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:28.402 14:27:34 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:28.402 14:27:34 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:28.402 
14:27:34 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:28.402 14:27:34 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:28.402 14:27:34 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:37:28.402 14:27:34 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:28.402 14:27:34 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:37:28.402 14:27:34 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:37:28.402 14:27:34 keyring_linux -- nvmf/common.sh@733 -- # python - 00:37:28.402 14:27:34 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:28.402 14:27:34 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:28.402 /tmp/:spdk-test:key1 00:37:28.402 14:27:34 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3060024 00:37:28.402 14:27:34 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3060024 00:37:28.402 14:27:34 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:28.402 14:27:34 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3060024 ']' 00:37:28.402 14:27:34 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:28.402 14:27:34 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:28.402 14:27:34 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:28.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:28.402 14:27:34 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:28.402 14:27:34 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:28.402 [2024-12-05 14:27:34.641373] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:37:28.402 [2024-12-05 14:27:34.641433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3060024 ] 00:37:28.662 [2024-12-05 14:27:34.728123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:28.662 [2024-12-05 14:27:34.759680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:29.233 14:27:35 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:29.233 14:27:35 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:37:29.233 14:27:35 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:29.233 14:27:35 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.233 14:27:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:29.233 [2024-12-05 14:27:35.447846] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:29.233 null0 00:37:29.233 [2024-12-05 14:27:35.479900] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:29.233 [2024-12-05 14:27:35.480250] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:29.233 14:27:35 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.233 14:27:35 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:29.233 221455135 00:37:29.233 14:27:35 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:29.233 898118450 00:37:29.233 14:27:35 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3060168 00:37:29.233 14:27:35 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3060168 /var/tmp/bperf.sock 00:37:29.233 14:27:35 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:29.233 14:27:35 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3060168 ']' 00:37:29.233 14:27:35 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:29.233 14:27:35 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:29.233 14:27:35 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:29.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:29.233 14:27:35 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:29.233 14:27:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:29.492 [2024-12-05 14:27:35.557122] Starting SPDK v25.01-pre git sha1 2bcaf03f7 / DPDK 24.03.0 initialization... 
00:37:29.493 [2024-12-05 14:27:35.557169] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3060168 ] 00:37:29.493 [2024-12-05 14:27:35.630571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:29.493 [2024-12-05 14:27:35.671229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:29.493 14:27:35 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:29.493 14:27:35 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:37:29.493 14:27:35 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:29.493 14:27:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:29.754 14:27:35 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:29.754 14:27:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:30.015 14:27:36 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:30.015 14:27:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:30.015 [2024-12-05 14:27:36.238984] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:30.015 nvme0n1 00:37:30.276 14:27:36 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:30.276 14:27:36 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:30.276 14:27:36 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:30.276 14:27:36 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:30.276 14:27:36 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:30.276 14:27:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:30.276 14:27:36 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:30.276 14:27:36 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:30.276 14:27:36 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:30.276 14:27:36 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:30.276 14:27:36 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:30.276 14:27:36 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:30.276 14:27:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:30.537 14:27:36 keyring_linux -- keyring/linux.sh@25 -- # sn=221455135 00:37:30.537 14:27:36 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:30.537 14:27:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:30.537 14:27:36 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 221455135 == \2\2\1\4\5\5\1\3\5 ]] 00:37:30.537 14:27:36 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 221455135 00:37:30.537 14:27:36 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:30.537 14:27:36 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:30.537 Running I/O for 1 seconds... 00:37:31.737 24585.00 IOPS, 96.04 MiB/s 00:37:31.737 Latency(us) 00:37:31.737 [2024-12-05T13:27:38.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:31.737 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:31.737 nvme0n1 : 1.01 24584.64 96.03 0.00 0.00 5191.22 2962.77 7318.19 00:37:31.737 [2024-12-05T13:27:38.037Z] =================================================================================================================== 00:37:31.737 [2024-12-05T13:27:38.037Z] Total : 24584.64 96.03 0.00 0.00 5191.22 2962.77 7318.19 00:37:31.737 { 00:37:31.737 "results": [ 00:37:31.737 { 00:37:31.737 "job": "nvme0n1", 00:37:31.737 "core_mask": "0x2", 00:37:31.737 "workload": "randread", 00:37:31.737 "status": "finished", 00:37:31.738 "queue_depth": 128, 00:37:31.738 "io_size": 4096, 00:37:31.738 "runtime": 1.005221, 00:37:31.738 "iops": 24584.643575890277, 00:37:31.738 "mibps": 96.0337639683214, 00:37:31.738 "io_failed": 0, 00:37:31.738 "io_timeout": 0, 00:37:31.738 "avg_latency_us": 5191.215506008983, 00:37:31.738 "min_latency_us": 2962.7733333333335, 00:37:31.738 "max_latency_us": 7318.1866666666665 00:37:31.738 } 00:37:31.738 ], 00:37:31.738 "core_count": 1 00:37:31.738 } 00:37:31.738 14:27:37 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:31.738 14:27:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:31.738 14:27:37 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:31.738 14:27:37 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:31.738 14:27:37 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:31.738 14:27:37 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:31.738 14:27:37 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:31.738 14:27:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:31.998 14:27:38 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:31.998 14:27:38 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:31.998 14:27:38 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:31.998 14:27:38 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:31.998 14:27:38 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:37:31.998 14:27:38 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:37:31.998 14:27:38 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:31.998 14:27:38 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:31.998 14:27:38 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:31.998 14:27:38 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:31.998 14:27:38 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:31.998 14:27:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:32.259 [2024-12-05 14:27:38.308968] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:32.259 [2024-12-05 14:27:38.309731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4c9e0 (107): Transport endpoint is not connected 00:37:32.259 [2024-12-05 14:27:38.310727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4c9e0 (9): Bad file descriptor 00:37:32.259 [2024-12-05 14:27:38.311729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:37:32.259 [2024-12-05 14:27:38.311741] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:32.259 [2024-12-05 14:27:38.311747] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:32.259 [2024-12-05 14:27:38.311753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
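The errors above are the point of this step: the target side was set up for key0, so a second attach using ':spdk-test:key1' is meant to fail, and the JSON-RPC request/response dump that follows records that expected failure. The harness expresses this with the NOT wrapper traced above (local es=0 ... (( !es == 0 ))). A condensed sketch of its logic, with the signal-exit branch simplified rather than copied from autotest_common.sh:

    # sketch of the NOT() helper: succeed iff the wrapped command fails
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"  # killed by a signal: treat as a real failure
        (( es != 0 ))                   # a plain non-zero exit is the expected outcome
    }

    NOT false  # exits 0: the command failed, as required
    NOT true   # exits 1: the command unexpectedly succeeded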
00:37:32.259 request: 00:37:32.259 { 00:37:32.259 "name": "nvme0", 00:37:32.259 "trtype": "tcp", 00:37:32.259 "traddr": "127.0.0.1", 00:37:32.259 "adrfam": "ipv4", 00:37:32.259 "trsvcid": "4420", 00:37:32.259 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:32.259 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:32.259 "prchk_reftag": false, 00:37:32.259 "prchk_guard": false, 00:37:32.259 "hdgst": false, 00:37:32.259 "ddgst": false, 00:37:32.259 "psk": ":spdk-test:key1", 00:37:32.259 "allow_unrecognized_csi": false, 00:37:32.259 "method": "bdev_nvme_attach_controller", 00:37:32.259 "req_id": 1 00:37:32.259 } 00:37:32.259 Got JSON-RPC error response 00:37:32.259 response: 00:37:32.259 { 00:37:32.259 "code": -5, 00:37:32.259 "message": "Input/output error" 00:37:32.259 } 00:37:32.259 14:27:38 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:37:32.259 14:27:38 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:32.259 14:27:38 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:32.259 14:27:38 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:32.259 14:27:38 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:32.259 14:27:38 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:32.259 14:27:38 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:32.259 14:27:38 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:32.259 14:27:38 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:32.259 14:27:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:32.259 14:27:38 keyring_linux -- keyring/linux.sh@33 -- # sn=221455135 00:37:32.259 14:27:38 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 221455135 00:37:32.259 1 links removed 00:37:32.259 14:27:38 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:32.259 14:27:38 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:32.259 14:27:38 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:32.259 14:27:38 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:32.259 14:27:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:32.259 14:27:38 keyring_linux -- keyring/linux.sh@33 -- # sn=898118450 00:37:32.259 14:27:38 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 898118450 00:37:32.259 1 links removed 00:37:32.259 14:27:38 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3060168 00:37:32.259 14:27:38 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3060168 ']' 00:37:32.259 14:27:38 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3060168 00:37:32.259 14:27:38 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:37:32.259 14:27:38 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:32.259 14:27:38 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3060168 00:37:32.259 14:27:38 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:32.259 14:27:38 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:32.259 14:27:38 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3060168' 00:37:32.259 killing process with pid 3060168 00:37:32.259 14:27:38 keyring_linux -- common/autotest_common.sh@973 -- # kill 3060168 00:37:32.259 Received shutdown signal, test time was about 1.000000 seconds 00:37:32.259 00:37:32.259 
Latency(us) 00:37:32.259 [2024-12-05T13:27:38.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:32.259 [2024-12-05T13:27:38.559Z] =================================================================================================================== 00:37:32.259 [2024-12-05T13:27:38.559Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:32.259 14:27:38 keyring_linux -- common/autotest_common.sh@978 -- # wait 3060168 00:37:32.259 14:27:38 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3060024 00:37:32.259 14:27:38 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3060024 ']' 00:37:32.259 14:27:38 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3060024 00:37:32.259 14:27:38 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:37:32.259 14:27:38 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:32.259 14:27:38 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3060024 00:37:32.520 14:27:38 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:32.520 14:27:38 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:32.520 14:27:38 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3060024' 00:37:32.520 killing process with pid 3060024 00:37:32.520 14:27:38 keyring_linux -- common/autotest_common.sh@973 -- # kill 3060024 00:37:32.520 14:27:38 keyring_linux -- common/autotest_common.sh@978 -- # wait 3060024 00:37:32.520 00:37:32.520 real 0m4.504s 00:37:32.520 user 0m8.204s 00:37:32.520 sys 0m1.357s 00:37:32.520 14:27:38 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:32.520 14:27:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:32.520 ************************************ 00:37:32.520 END TEST keyring_linux 00:37:32.520 ************************************ 00:37:32.520 14:27:38 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:37:32.520 14:27:38 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:37:32.520 14:27:38 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:37:32.520 14:27:38 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:37:32.520 14:27:38 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:37:32.520 14:27:38 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:37:32.520 14:27:38 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:37:32.520 14:27:38 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:37:32.520 14:27:38 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:37:32.520 14:27:38 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:37:32.520 14:27:38 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:37:32.520 14:27:38 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:37:32.520 14:27:38 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:37:32.520 14:27:38 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:37:32.520 14:27:38 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:37:32.520 14:27:38 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:37:32.520 14:27:38 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:37:32.520 14:27:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:32.520 14:27:38 -- common/autotest_common.sh@10 -- # set +x 00:37:32.520 14:27:38 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:37:32.520 14:27:38 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:37:32.520 14:27:38 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:37:32.520 14:27:38 -- common/autotest_common.sh@10 -- # set +x 00:37:40.656 INFO: APP EXITING 
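Teardown of both daemons goes through the same killprocess helper whose xtrace fills this section: confirm the pid is still alive with kill -0, look up the command name, log, then kill and wait. A condensed reconstruction of that flow (the sudo-wrapper special case and any force-kill fallback are elided, so treat this as a sketch rather than the verbatim helper):

    # sketch of the killprocess flow seen in the xtrace above
    killprocess() {
        local pid=$1 name
        [[ -n $pid ]] || return 1                 # nothing to kill
        kill -0 "$pid" || return 1                # already gone
        [[ $(uname) == Linux ]] && name=$(ps --no-headers -o comm= "$pid")
        # the real helper branches here when $name is a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                               # reap the child, surface its status
    }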
00:37:40.656 INFO: killing all VMs 00:37:40.656 INFO: killing vhost app 00:37:40.656 INFO: EXIT DONE 00:37:44.072 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:37:44.072 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:37:44.072 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:37:44.072 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:37:44.072 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:37:44.072 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:37:44.072 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:37:44.072 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:37:44.072 0000:65:00.0 (144d a80a): Already using the nvme driver 00:37:44.072 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:37:44.072 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:37:44.072 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:37:44.072 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:37:44.072 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:37:44.072 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:37:44.072 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:37:44.072 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:37:47.376 Cleaning 00:37:47.376 Removing: /var/run/dpdk/spdk0/config 00:37:47.376 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:47.376 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:47.376 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:47.376 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:47.376 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:47.376 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:47.376 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:47.376 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:47.376 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:47.376 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:47.376 Removing: /var/run/dpdk/spdk1/config 00:37:47.376 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:47.376 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:47.376 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:47.376 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:47.376 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:47.376 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:47.376 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:47.376 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:47.376 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:47.376 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:47.376 Removing: /var/run/dpdk/spdk2/config 00:37:47.376 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:47.376 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:47.376 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:47.376 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:47.376 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:47.376 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:47.376 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:47.376 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:47.376 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:47.376 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:47.376 Removing: /var/run/dpdk/spdk3/config 00:37:47.376 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:47.376 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:47.376 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:47.376 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:47.376 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:47.376 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:47.376 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:47.376 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:47.638 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:47.638 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:47.638 Removing: /var/run/dpdk/spdk4/config 00:37:47.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:47.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:47.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:47.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:47.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:47.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:47.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:47.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:47.638 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:47.638 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:47.638 Removing: /dev/shm/bdev_svc_trace.1 00:37:47.638 Removing: /dev/shm/nvmf_trace.0 00:37:47.638 Removing: /dev/shm/spdk_tgt_trace.pid2485018 00:37:47.639 Removing: /var/run/dpdk/spdk0 00:37:47.639 Removing: /var/run/dpdk/spdk1 00:37:47.639 Removing: /var/run/dpdk/spdk2 00:37:47.639 Removing: /var/run/dpdk/spdk3 00:37:47.639 Removing: /var/run/dpdk/spdk4 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2483349 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2485018 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2485670 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2486735 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2487055 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2488163 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2488450 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2488771 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2489849 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2490517 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2490918 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2491314 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2491726 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2492130 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2492284 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2492520 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2492993 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2494244 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2498131 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2498493 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2498860 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2498880 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2499421 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2499582 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2499958 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2500214 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2500476 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2500666 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2500928 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2501047 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2501535 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2501844 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2502249 00:37:47.639 Removing: /var/run/dpdk/spdk_pid2506895 00:37:47.639 Removing: 
/var/run/dpdk/spdk_pid2512153 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2524257 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2524943 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2530300 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2530690 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2535758 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2542835 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2546762 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2559360 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2570191 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2572293 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2573451 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2594283 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2599193 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2655518 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2662484 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2669513 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2677417 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2677419 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2678424 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2679427 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2680435 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2681101 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2681111 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2681442 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2681452 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2681524 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2682593 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2683601 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2684670 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2685286 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2685410 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2685650 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2686992 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2688340 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2697999 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2731797 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2737200 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2739199 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2741535 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2741719 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2742019 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2742432 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2743541 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2745882 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2746646 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2747345 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2750057 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2750718 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2751476 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2756528 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2763209 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2763211 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2763213 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2767919 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2777987 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2782695 00:37:47.901 Removing: /var/run/dpdk/spdk_pid2790126 00:37:48.163 Removing: /var/run/dpdk/spdk_pid2791788 00:37:48.163 Removing: /var/run/dpdk/spdk_pid2793822 00:37:48.163 Removing: /var/run/dpdk/spdk_pid2795664 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2801131 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2806505 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2811545 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2820640 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2820645 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2825702 00:37:48.164 Removing: 
/var/run/dpdk/spdk_pid2826036 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2826363 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2826725 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2826835 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2832414 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2832964 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2838423 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2841673 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2848277 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2855621 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2865747 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2874371 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2874402 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2896668 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2897350 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2898051 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2898984 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2900006 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2901245 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2902038 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2902726 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2907799 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2908121 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2915475 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2915639 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2922180 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2927350 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2938901 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2939614 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2944724 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2945119 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2950268 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2957459 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2960528 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2972683 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2983328 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2985197 00:37:48.164 Removing: /var/run/dpdk/spdk_pid2986308 00:37:48.164 Removing: /var/run/dpdk/spdk_pid3006202 00:37:48.164 Removing: /var/run/dpdk/spdk_pid3010917 00:37:48.164 Removing: /var/run/dpdk/spdk_pid3014122 00:37:48.164 Removing: /var/run/dpdk/spdk_pid3021886 00:37:48.164 Removing: /var/run/dpdk/spdk_pid3021891 00:37:48.164 Removing: /var/run/dpdk/spdk_pid3027783 00:37:48.164 Removing: /var/run/dpdk/spdk_pid3029998 00:37:48.164 Removing: /var/run/dpdk/spdk_pid3032500 00:37:48.164 Removing: /var/run/dpdk/spdk_pid3033688 00:37:48.164 Removing: /var/run/dpdk/spdk_pid3036202 00:37:48.164 Removing: /var/run/dpdk/spdk_pid3037458 00:37:48.425 Removing: /var/run/dpdk/spdk_pid3047349 00:37:48.425 Removing: /var/run/dpdk/spdk_pid3048011 00:37:48.425 Removing: /var/run/dpdk/spdk_pid3048630 00:37:48.425 Removing: /var/run/dpdk/spdk_pid3051338 00:37:48.425 Removing: /var/run/dpdk/spdk_pid3052080 00:37:48.425 Removing: /var/run/dpdk/spdk_pid3052659 00:37:48.425 Removing: /var/run/dpdk/spdk_pid3057505 00:37:48.425 Removing: /var/run/dpdk/spdk_pid3057785 00:37:48.425 Removing: /var/run/dpdk/spdk_pid3059555 00:37:48.425 Removing: /var/run/dpdk/spdk_pid3060024 00:37:48.425 Removing: /var/run/dpdk/spdk_pid3060168 00:37:48.425 Clean 00:37:48.425 14:27:54 -- common/autotest_common.sh@1453 -- # return 0 00:37:48.425 14:27:54 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:37:48.425 14:27:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:48.425 14:27:54 -- common/autotest_common.sh@10 -- # set +x 00:37:48.425 14:27:54 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:37:48.425 
14:27:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:48.425 14:27:54 -- common/autotest_common.sh@10 -- # set +x 00:37:48.425 14:27:54 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:48.425 14:27:54 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:48.425 14:27:54 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:48.425 14:27:54 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:37:48.425 14:27:54 -- spdk/autotest.sh@398 -- # hostname 00:37:48.425 14:27:54 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:48.686 geninfo: WARNING: invalid characters removed from testname! 00:38:15.265 14:28:20 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:17.175 14:28:23 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:19.084 14:28:24 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:20.994 14:28:27 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:22.904 14:28:28 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:24.287 14:28:30 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:26.196 14:28:32 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:26.196 14:28:32 -- spdk/autorun.sh@1 -- $ timing_finish 00:38:26.196 14:28:32 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:38:26.196 14:28:32 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:26.196 14:28:32 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:38:26.196 14:28:32 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:26.196 + [[ -n 2398575 ]] 00:38:26.196 + sudo kill 2398575 00:38:26.206 [Pipeline] } 00:38:26.222 [Pipeline] // stage 00:38:26.228 [Pipeline] } 00:38:26.245 [Pipeline] // timeout 00:38:26.251 [Pipeline] } 00:38:26.268 [Pipeline] // catchError 00:38:26.273 [Pipeline] } 00:38:26.288 [Pipeline] // wrap 00:38:26.294 [Pipeline] } 00:38:26.308 [Pipeline] // catchError 00:38:26.317 [Pipeline] stage 00:38:26.319 [Pipeline] { (Epilogue) 00:38:26.333 [Pipeline] catchError 00:38:26.335 [Pipeline] { 00:38:26.352 [Pipeline] echo 00:38:26.354 Cleanup processes 00:38:26.363 [Pipeline] sh 00:38:26.656 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:26.656 3073002 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:26.672 [Pipeline] sh 00:38:26.961 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:26.961 ++ grep -v 'sudo pgrep' 00:38:26.961 ++ awk '{print $1}' 00:38:26.961 + sudo kill -9 00:38:26.961 + true 00:38:26.975 [Pipeline] sh 00:38:27.261 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:39.504 [Pipeline] sh 00:38:39.793 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:39.793 Artifacts sizes are good 00:38:39.808 [Pipeline] archiveArtifacts 00:38:39.816 Archiving artifacts 00:38:39.959 [Pipeline] sh 00:38:40.247 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:40.262 [Pipeline] cleanWs 00:38:40.314 [WS-CLEANUP] Deleting project workspace... 00:38:40.314 [WS-CLEANUP] Deferred wipeout is used... 00:38:40.370 [WS-CLEANUP] done 00:38:40.372 [Pipeline] } 00:38:40.390 [Pipeline] // catchError 00:38:40.402 [Pipeline] sh 00:38:40.692 + logger -p user.info -t JENKINS-CI 00:38:40.703 [Pipeline] } 00:38:40.715 [Pipeline] // stage 00:38:40.721 [Pipeline] } 00:38:40.737 [Pipeline] // node 00:38:40.742 [Pipeline] End of Pipeline 00:38:40.850 Finished: SUCCESS